deep-learning

Why is TensorFlow not running on the GPU while GPU devices are identified in Python?

北战南征 submitted on 2020-08-26 10:42:46
Question: I installed TensorFlow 2.2.0 and TensorFlow-gpu 2.2.0 on Windows 10. I also installed CUDA Toolkit v10.1 and copied the cuDNN 7.6.5 files into the CUDA directories. My GPU is an NVIDIA GeForce 940MX. In addition, I set the CUDA path on Windows. When I test devices with the code below, both the CPU and the GPU are recognized:

    from tensorflow.python.client import device_lib
    device_lib.list_local_devices()

The output is:

    [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation:
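Beyond listing devices, TensorFlow 2.x can report where operations are actually placed at runtime, which is the more direct test of whether the GPU is being used. A minimal sketch of such a check (not part of the original question, assuming the TensorFlow 2.2 setup described above):

    import tensorflow as tf

    # List the GPUs TensorFlow can actually use for computation
    print(tf.config.list_physical_devices('GPU'))

    # Log the device each operation runs on; the matmul below should report
    # /device:GPU:0 if the CUDA/cuDNN installation is picked up correctly
    tf.debugging.set_log_device_placement(True)
    a = tf.random.normal((1000, 1000))
    b = tf.random.normal((1000, 1000))
    c = tf.matmul(a, b)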

PyTorch - How to deactivate dropout in evaluation mode

杀马特。学长 韩版系。学妹 submitted on 2020-08-22 09:34:11
Question: This is the model I defined; it is a simple LSTM with two fully connected layers.

    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim

    class mylstm(nn.Module):
        def __init__(self, input_dim, output_dim, hidden_dim, linear_dim):
            super(mylstm, self).__init__()
            self.hidden_dim = hidden_dim
            self.lstm = nn.LSTMCell(input_dim, self.hidden_dim)
            self.linear1 = nn.Linear(hidden_dim, linear_dim)
            self.linear2 = nn.Linear(linear_dim, output_dim)

        def forward(self, input)
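The question in the title is usually answered by PyTorch's module modes: calling model.eval() switches any nn.Dropout (and batch-norm) submodules to their inference behaviour, and model.train() switches them back. A minimal sketch, not from the original post, with hypothetical dimensions and assuming the full model (cut off above) contains dropout layers:

    model = mylstm(input_dim=10, output_dim=1, hidden_dim=32, linear_dim=16)

    model.eval()              # dropout layers now act as identity at inference time
    with torch.no_grad():     # additionally skip gradient tracking while predicting
        pass                  # run model(...) on validation/test inputs here
    model.train()             # restore dropout behaviour before further training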

TensorFlow: loss decreasing, but accuracy stable

 ̄綄美尐妖づ submitted on 2020-08-22 03:25:00
Question: My team is training a CNN in TensorFlow for binary classification of damaged/acceptable parts. We created our code by modifying the CIFAR-10 example code. In my prior experience with neural networks, I always trained until the loss was very close to 0 (well below 1). However, we are now evaluating our model with a validation set during training (on a separate GPU), and it seems like the precision stopped increasing after about 6.7k steps, while the loss is still dropping steadily after over
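This gap between the two metrics is easy to reproduce in isolation: cross-entropy keeps falling as the model grows more confident in decisions it already makes, while the thresholded accuracy does not move. A small illustrative sketch (not from the original post), using plain NumPy:

    import numpy as np

    def binary_cross_entropy(y_true, p):
        # mean negative log-likelihood of the true labels
        return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

    y_true = np.array([1, 1, 0, 0])
    p_early = np.array([0.6, 0.6, 0.4, 0.4])   # correct but unsure predictions
    p_late  = np.array([0.9, 0.9, 0.1, 0.1])   # same decisions, just more confident

    for name, p in [("early", p_early), ("late", p_late)]:
        acc = np.mean((p > 0.5) == y_true)
        print(name, "loss:", round(binary_cross_entropy(y_true, p), 3), "accuracy:", acc)
    # loss falls from about 0.51 to about 0.11 while accuracy stays at 1.0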

What is the difference between Flatten() and GlobalAveragePooling2D() in Keras?

ぃ、小莉子 submitted on 2020-08-21 06:05:07
Question: I want to pass the output of ConvLSTM and Conv2D to a Dense layer in Keras. What is the difference between using global average pooling and flatten? Both work in my case.

    model.add(ConvLSTM2D(filters=256, kernel_size=(3, 3)))
    model.add(Flatten())
    # or
    model.add(GlobalAveragePooling2D())
    model.add(Dense(256, activation='relu'))

Answer 1: That both seem to work doesn't mean they do the same thing. Flatten will take a tensor of any shape and transform it into a one-dimensional tensor (plus the samples
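The shape difference is easy to see on a small example. The sketch below is not from the original answer; the (1, 5, 5, 256) feature map is a made-up stand-in for the convolutional output:

    import tensorflow as tf
    from tensorflow.keras import layers

    x = tf.random.normal((1, 5, 5, 256))              # hypothetical feature map
    print(layers.Flatten()(x).shape)                   # (1, 6400): every spatial position kept as a feature
    print(layers.GlobalAveragePooling2D()(x).shape)    # (1, 256): one mean value per channel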

How do I determine the binary class predicted by a convolutional neural network in Keras?

泪湿孤枕 submitted on 2020-08-17 07:14:48
Question: I'm building a CNN in Keras to perform sentiment analysis. Everything is working perfectly: the model is trained and ready to be launched to production. However, when I try to predict on new unlabelled data using model.predict(), it only outputs the associated probability. I tried np.argmax(), but it always outputs 0 even when it should be 1 (my model achieved 80% accuracy on the test set). Here is my code to pre-process the data:

    # Pre-processing data
    x = df[df
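For a binary model that ends in a single sigmoid unit, model.predict() returns one probability per sample, so np.argmax() over that single value is always 0. The usual fix is to threshold the probability instead. A minimal sketch (not from the original post; model and x_new stand in for the trained model and the new, unlabelled data):

    import numpy as np

    probs = model.predict(x_new)           # shape (n_samples, 1), values in [0, 1]
    classes = (probs > 0.5).astype(int)    # 1 when the probability exceeds 0.5, else 0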

Grid Search for Keras with multiple inputs

人盡茶涼 submitted on 2020-08-17 04:35:35
Question: I am trying to do a grid search over my hyperparameters to tune a deep learning architecture. The model has multiple inputs, and I am trying to use sklearn's grid search API. The problem is that the grid search API only takes a single array as input, and the code fails when it checks the data size dimension (my input dimension is 5 * number of data points, while according to the sklearn API it should be number of data points * feature dimension). My code looks something like this:

    from keras
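One common workaround (not from the original post) is to skip the sklearn wrapper entirely and loop over sklearn.model_selection.ParameterGrid, which places no constraint on how the Keras inputs are shaped. The parameter names, build_model, and the input/target arrays below are hypothetical placeholders:

    from sklearn.model_selection import ParameterGrid

    param_grid = {"lr": [1e-3, 1e-4], "hidden_dim": [64, 128]}

    best_score, best_params = None, None
    for params in ParameterGrid(param_grid):
        model = build_model(**params)                 # returns a compiled multi-input Keras model
        model.fit([x1_train, x2_train], y_train,      # multiple inputs are passed as a list
                  epochs=5, verbose=0)
        score = model.evaluate([x1_val, x2_val], y_val, verbose=0)[1]   # index 1 = first metric
        if best_score is None or score > best_score:
            best_score, best_params = score, params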

How to read the label (annotation) file from the Synthia dataset?

流过昼夜 submitted on 2020-08-11 03:18:06
Question: I am new to the Synthia dataset. I would like to read the label file from this dataset. I expect a one-channel matrix with the size of my RGB image, but when I load the data I get 3x760x1280 and it is full of zeros. I tried to read it as follows:

    label = np.asarray(imread(label_path))

Can anyone help me read these label files correctly?

Answer 1: I found the right way to read it, as below:

    label = np.asarray(imageio.imread(label_path, format='PNG-FI'))[:,:,0]

Source: https://stackoverflow.com/questions
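A quick way to sanity-check the result of the accepted snippet (this check is not part of the original answer; label_path is the same path used in the question) is to confirm the array is two-dimensional and see which class ids it contains:

    import numpy as np
    import imageio

    label = np.asarray(imageio.imread(label_path, format='PNG-FI'))[:, :, 0]
    print(label.shape)        # expected: (760, 1280), one label id per pixel
    print(np.unique(label))   # the class ids present in this frame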

How to convert raw code into function(s): example

老子叫甜甜 submitted on 2020-08-10 03:37:12
Question: I have just started learning how to code in Python and would appreciate it if anyone could give me a brief explanation/hint on how to convert raw code into function(s). Example machine learning code:

    # create model
    model = Sequential()
    model.add(Dense(neurons, input_dim=8, kernel_initializer='uniform', activation='linear', kernel_constraint=maxnorm(4)))
    model.add(Dropout(0.2))
    model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary
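A minimal sketch of how such raw code is usually wrapped into a function (not part of the original question): the tunable values become parameters with defaults, and the function returns the compiled model. The default values, the loss/optimizer choice, and the tf.keras imports below are assumptions for illustration:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout
    from tensorflow.keras.constraints import max_norm

    def create_model(neurons=16, dropout_rate=0.2, weight_constraint=4):
        # Build the same architecture, with the tunable values exposed as parameters
        model = Sequential()
        model.add(Dense(neurons, input_dim=8, kernel_initializer='uniform',
                        activation='linear', kernel_constraint=max_norm(weight_constraint)))
        model.add(Dropout(dropout_rate))
        model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
        return model

    model = create_model(neurons=32)   # call the function wherever the raw block used to sit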