deep-learning

Training with dropout

Submitted by 守給你的承諾、 on 2021-01-27 12:46:15

Question: How are the many thinned networks resulting from dropout averaged, and which weights are used during the testing stage? I'm really confused about this, because each thinned network would learn a different set of weights. Is backpropagation done separately for each of the thinned networks? And how exactly are weights shared among these thinned networks? At testing time only one neural network and one set of weights are used, so which set of weights is it? It is said that a…
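The standard answer, sketched here in plain NumPy rather than taken from any particular implementation: all thinned networks share a single weight tensor. Each minibatch samples a fresh binary mask and backpropagates through only the surviving units, updating the shared weights in place, so no separate per-network weights exist. At test time the full network is used with each weight scaled by the keep probability, which approximates averaging over the exponentially many thinned networks:

```python
import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.5                      # probability of keeping a unit

x = np.ones(4)
w = np.array([0.2, 0.4, 0.6, 0.8])   # the ONE shared weight vector

# Training step: a random mask "thins" the layer; gradients flow
# only through the surviving units, but they update the shared w.
mask = rng.random(4) < p_keep
train_out = (x * mask) @ w

# Test step: one network, full weights scaled by p_keep.
test_out = x @ (w * p_keep)
print(test_out)                   # 0.5 * (0.2+0.4+0.6+0.8) = 1.0
```

Equivalently, many frameworks use "inverted dropout" (divide by p_keep during training) so test-time weights need no rescaling at all.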

How to print the “actual” learning rate in Adadelta in pytorch

Submitted by 試著忘記壹切 on 2021-01-27 12:41:08

Question: In short: I can't draw an lr/epoch curve when using the Adadelta optimizer in PyTorch, because optimizer.param_groups[0]['lr'] always returns the same value. In detail: Adadelta "dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent" [1]. In PyTorch, the source code of Adadelta is here: https://pytorch.org/docs/stable/_modules/torch/optim/adadelta.html#Adadelta Since it requires no manual tuning of learning…
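The short explanation: in PyTorch's Adadelta, param_groups[0]['lr'] is only the static multiplier lr (default 1.0) and never changes; the adaptive part lives in per-parameter state buffers, so there is no single scalar "actual learning rate" to print. A NumPy sketch of the Adadelta rule (the names Eg2/Edx2 are my own, not PyTorch's buffer names) shows where the effective per-step rate comes from:

```python
import numpy as np

eps, rho = 1e-6, 0.9
Eg2, Edx2 = 0.0, 0.0              # running averages of g^2 and dx^2

def adadelta_step(g):
    """One Adadelta update for a scalar; returns (dx, effective_rate)."""
    global Eg2, Edx2
    Eg2 = rho * Eg2 + (1 - rho) * g * g
    # The "actual" rate is per-parameter and per-step:
    eff_rate = np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps)
    dx = -eff_rate * g
    Edx2 = rho * Edx2 + (1 - rho) * dx * dx
    return dx, eff_rate

rates = [adadelta_step(1.0)[1] for _ in range(5)]
print(rates)   # the effective rate changes every step
```

So the quantity to plot is this ratio of the two state accumulators (per parameter), not the 'lr' entry of the param group.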

Recurrent neural networks for Time Series with Multiple Variables - TensorFlow

Submitted by 北慕城南 on 2021-01-27 05:42:27

Question: I'm using previous demand to predict future demand, using 3 variables, but whenever I run the code my Y axis raises an error. If I use only one variable on the Y axis, there is no error. Example:

demandaY = bike_data[['cnt']]
n_steps = 20
for time_step in range(1, n_steps+1):
    demandaY['cnt'+str(time_step)] = demandaY[['cnt']].shift(-time_step).values
y = demandaY.iloc[:, 1:].values
y = np.reshape(y, (y.shape[0], n_steps, 1))

DATASET SCRIPT:

features = ['cnt','temp','hum']
demanda = bike…
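A common source of shape errors with this pattern is building the target by repeated column shifts. A sliding-window helper that emits consistent (samples, n_steps, n_features) tensors is usually easier to get right; this is a sketch (make_windows is a hypothetical helper, not from the question's script, and column 0 stands in for 'cnt'):

```python
import numpy as np

def make_windows(series, n_steps):
    """Turn a (T, n_features) array into supervised (X, y) windows.

    X: (samples, n_steps, n_features) past observations
    y: (samples, 1) next value of the target column (column 0 here)
    """
    X, y = [], []
    for t in range(len(series) - n_steps):
        X.append(series[t:t + n_steps])
        y.append(series[t + n_steps, 0])       # predict 'cnt'
    return np.array(X), np.array(y).reshape(-1, 1)

data = np.arange(30, dtype=float).reshape(10, 3)   # 10 steps, 3 vars
X, y = make_windows(data, n_steps=4)
print(X.shape, y.shape)   # (6, 4, 3) (6, 1)
```

With this layout, X feeds the RNN with all 3 variables while y stays a single target column, so X and y can never disagree on the number of features.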

Efficient allreduce is not supported for 2 IndexedSlices

Submitted by 佐手、 on 2021-01-26 04:13:55

Question: I am trying to run a subclassed Keras Model on multiple GPUs. The code runs as expected; however, the following warning crops up during execution: "Efficient allreduce is not supported for 2 IndexedSlices". What does this mean? I followed the multi-GPU tutorial in the TensorFlow 2.0 Beta guide. I am also using the Dataset API for my input pipeline.

Source: https://stackoverflow.com/questions/56843876/efficient-allreduce-is-not-supported-for-2-indexedslices
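The warning means some of the gradients arrive as tf.IndexedSlices, the sparse row-wise gradients typically produced by tf.gather or an embedding lookup, and the distribution strategy has no efficient allreduce for them, so it falls back to converting them to dense tensors before reducing. Conceptually, that densification looks like this NumPy sketch (the sizes are made up for illustration):

```python
import numpy as np

# An "IndexedSlices" gradient: values exist only for the rows that
# were actually looked up (e.g. from an embedding table of 5 rows).
num_rows, dim = 5, 3
indices = np.array([1, 3])
values = np.ones((2, dim))

# The fallback scatters the sparse rows into a full-size dense
# tensor, which can then be summed across replicas like any other.
dense = np.zeros((num_rows, dim))
np.add.at(dense, indices, values)   # handles repeated indices too
print(dense.sum())                  # 6.0: same total gradient, dense layout
```

It is a performance note rather than an error: results are correct, but the allreduce moves the full dense tensor instead of just the touched rows.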

Why doesn't custom training loop average loss over batch_size?

Submitted by 妖精的绣舞 on 2021-01-25 20:25:05

Question: The code snippet below is the custom training loop from the official TensorFlow tutorial: https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch. Another tutorial also does not average the loss over batch_size, as shown here: https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough. Why is loss_value not averaged over batch_size at the line loss_value = loss_fn(y_batch_train, logits)? Is this a bug? From another question here, Loss function works with reduce…
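The usual answer is that this is not a bug: Keras loss classes default to a reduction that behaves as sum-over-batch-size, so the scalar returned by loss_fn is already the mean over the batch. A NumPy sketch of that reduction (the per-sample numbers are illustrative):

```python
import numpy as np

# Per-sample cross-entropy losses for a batch of 4.
per_sample = np.array([0.2, 0.8, 0.5, 0.1])

# Keras's default reduction divides the summed per-sample losses by
# the batch size, i.e. it returns the batch mean.
loss_value = per_sample.sum() / per_sample.shape[0]
print(loss_value)   # 0.4, identical to per_sample.mean()
```

Explicit averaging is only needed when the loss is constructed with reduction set to NONE or SUM, in which case the loop must divide by the (global) batch size itself.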

LSTM for time-series prediction failing to learn (PyTorch)

Submitted by 社会主义新天地 on 2021-01-24 20:16:31

Question: I'm currently building an LSTM network to forecast time-series data using PyTorch. I tried to share all the code pieces that I thought would be helpful, but please let me know if there's anything further I can provide. I added some comments at the end of the post about what the underlying issue might be. From the univariate time-series data indexed by date, I created 3 date features and split the data into training and validation sets as below.

# X_train weekday…
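Without the full code one can only guess, but a frequent cause of an LSTM that fails to learn on raw time series is unscaled inputs and targets. Fitting a scaler on the training split only and applying it to both splits often helps; a minimal sketch (minmax_scale is a hypothetical helper, not from the question):

```python
import numpy as np

def minmax_scale(train, valid):
    """Fit min-max scaling on the training split only, apply to both.

    Fitting on the full series would leak validation statistics into
    training, so only the training min/max are used.
    """
    lo, hi = train.min(axis=0), train.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)    # avoid divide-by-zero
    return (train - lo) / scale, (valid - lo) / scale

train = np.array([[10.], [20.], [30.]])
valid = np.array([[25.]])
tr, va = minmax_scale(train, valid)
print(tr.ravel(), va.ravel())   # [0.  0.5 1. ] [0.75]
```

The inverse transform (predictions * scale + lo) is then applied to model outputs before plotting them against the raw series.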

How to resize a tiff image with multiple channels?

Submitted by 橙三吉。 on 2021-01-24 14:12:31

Question: I have a tiff image of size 21 x 513 x 513, where (513, 513) is the height and width and the image contains 21 channels. How can I resize this image to 21 x 500 x 375? I am trying to use Pillow to do so, but I can't figure out whether I am doing something wrong.

>>> from PIL import Image
>>> from tifffile import imread
>>> img = Image.open('new.tif')
>>> img
<PIL.TiffImagePlugin.TiffImageFile image mode=F size=513x513 at 0x7FB0C8E5B940>
>>> resized_img = img.resize((500, 375), Image.ANTIALIAS)
>>> …
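Note that Pillow opened the file as a single 2-D frame (mode=F, size=513x513), so img.resize only ever touches one channel. One workaround is to load the full (21, 513, 513) array with tifffile.imread and resize each channel yourself, for example with cv2.resize or skimage.transform.resize in a loop. A dependency-free nearest-neighbour sketch of the per-channel idea:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a (C, H, W) array, all channels at once.

    Precomputes the source row/column index for every output pixel,
    then fancy-indexes each channel with the same index grids.
    """
    c, h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[:, rows[:, None], cols[None, :]]

img = np.random.rand(21, 513, 513)      # stand-in for tifffile.imread('new.tif')
small = resize_nearest(img, 500, 375)
print(small.shape)                      # (21, 500, 375)
```

For smoother results, swap the nearest-neighbour indexing for an interpolating resizer (e.g. scipy.ndimage.zoom on the last two axes), which keeps the same channels-first layout.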
