deep-learning

How to resize a tiff image with multiple channels?

Submitted by 白昼怎懂夜的黑 on 2021-01-24 14:07:28

Question: I have a TIFF image of size 21 × 513 × 513, where (513, 513) is the height and width and 21 is the number of channels. How can I resize it to 21 × 500 × 375? I am trying to use Pillow, but I can't figure out whether I am doing something wrong:

```python
>>> from PIL import Image
>>> from tifffile import imread
>>> img = Image.open('new.tif')
>>> img
<PIL.TiffImagePlugin.TiffImageFile image mode=F size=513x513 at 0x7FB0C8E5B940>
>>> resized_img = img.resize((500, 375), Image.ANTIALIAS)
>>>
```
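One workaround (a sketch, not the asker's code): Pillow's TIFF reader only exposes one 513×513 float plane at a time, so the multi-channel array can instead be loaded channel-first with tifffile and each channel resized separately. A random array stands in below for the real `tifffile.imread('new.tif')` result:

```python
import numpy as np
from PIL import Image

# Stand-in for `tifffile.imread('new.tif')`, which would return the real
# (21, 513, 513) channel-first array.
arr = np.random.rand(21, 513, 513).astype(np.float32)

# PIL's resize takes (width, height), so (375, 500) yields 500x375 planes.
resized = np.stack([
    np.asarray(Image.fromarray(ch).resize((375, 500), Image.BILINEAR))
    for ch in arr
])
print(resized.shape)  # (21, 500, 375)
```

Note that the target "21 X 500 X 375" means height 500 and width 375, while PIL's `resize` expects `(width, height)`; the swapped tuple in the question's `resize((500, 375), ...)` would produce 375×500 planes.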

TypeError: __init__() missing 1 required positional argument: 'units'

Submitted by 半城伤御伤魂 on 2021-01-24 11:39:11

Question: I am working in Python and TensorFlow, but I am missing the 'units' argument and I do not know how to solve it. Here is the code:

```python
def createModel():
    model = Sequential()
    # first set of CONV => RELU => MAX POOL layers
    model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=inputShape))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add
```
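The traceback itself is ordinary Python: some layer in the (truncated) model is being constructed without its required first positional argument. In Keras, `Dense` is the usual suspect, since its first parameter is `units`. A tiny stand-in class (not the real Keras code, just the same signature shape) reproduces the mechanism:

```python
# Stand-in with the same signature shape as keras.layers.Dense; the real class
# raises the same TypeError when constructed as Dense() with no arguments.
class Dense:
    def __init__(self, units, activation=None):
        self.units = units
        self.activation = activation

try:
    Dense()  # the failing pattern
except TypeError as e:
    print(e)  # message names the missing 'units' argument

layer = Dense(units=128, activation='relu')  # the fix: pass units explicitly
```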

Convolutional Neural Network visualization - weights or activations?

Submitted by て烟熏妆下的殇ゞ on 2021-01-24 08:18:48

Question: Is the visualization above a rendering of the weights of the first convolutional layer, or of the activations that a given input image produces on that layer? Below is a visualization of the first convolutional layer's weights from the Inception v2 model I have been training for 48 hours. I'm sure the model has not converged after only 48 hours (on a CPU), but shouldn't those weights have begun to smooth out by now, given that training accuracy is over 90%?

Answer 1: According to ImageNet
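For context on what such a rendering usually is: first-layer weight visualizations are typically made by normalizing the kernel tensor and tiling each output filter into a grid. A NumPy sketch (the HWIO layout matches TensorFlow's convention; the shapes and random weights are illustrative stand-ins for the trained kernel):

```python
import numpy as np

def weight_grid(w):
    # w: (kh, kw, in_ch, out_ch) kernel tensor; in_ch = 3 lets each filter
    # render as an RGB tile, which is how first-layer grids are usually drawn.
    kh, kw, cin, cout = w.shape
    w = (w - w.min()) / (w.max() - w.min() + 1e-8)  # normalize to [0, 1]
    cols = int(np.ceil(np.sqrt(cout)))
    rows = int(np.ceil(cout / cols))
    grid = np.zeros((rows * kh, cols * kw, cin), dtype=w.dtype)
    for i in range(cout):
        r, c = divmod(i, cols)
        grid[r * kh:(r + 1) * kh, c * kw:(c + 1) * kw] = w[..., i]
    return grid

# 64 filters of 7x7x3, tiled into an 8x8 grid of tiles -> a 56x56x3 image.
grid = weight_grid(np.random.randn(7, 7, 3, 64).astype(np.float32))
```

An activation visualization, by contrast, depends on a particular input image; a weight grid like this one does not, which is one way to tell the two apart.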

Best loss function for multi-class classification when the dataset is imbalance?

Submitted by 谁说我不能喝 on 2021-01-24 07:42:31

Question: I'm currently using the cross-entropy loss function, but with an imbalanced dataset the performance is not great. Is there a better loss function?

Answer 1: It's a very broad subject, but IMHO you should try focal loss. It was introduced by Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollar to handle imbalanced predictions in object detection, and since its introduction it has also been used for segmentation. The idea of focal loss is to reduce both the loss and the gradient for correct
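A minimal NumPy sketch of the multi-class focal loss being recommended (gamma=2.0 is the paper's default; `probs` stands in for softmax outputs and `y` for integer labels — all names here are illustrative, not from the thread):

```python
import numpy as np

def focal_loss(probs, y, gamma=2.0, eps=1e-8):
    # (1 - p_t)^gamma down-weights examples the model already classifies
    # confidently, which tames the dominant easy/majority classes.
    pt = probs[np.arange(len(y)), y]  # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt + eps)))

probs = np.array([[0.9, 0.05, 0.05],
                  [0.3, 0.6,  0.1]])
y = np.array([0, 1])
loss = focal_loss(probs, y)
```

A quick sanity check: with `gamma=0` the focusing term vanishes and this reduces to ordinary cross-entropy.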

Tensorflow: Setting allow_growth to true does still allocate memory of all my GPUs

Submitted by ◇◆丶佛笑我妖孽 on 2021-01-23 11:09:09

Question: I have several GPUs, but I only want to use one GPU for my training. I am using the following options:

```python
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
```

Despite setting all of these options, every one of my GPUs allocates memory, and the number of processes equals the number of GPUs. How can I prevent this from happening? Note: I do not want to set the devices manually, and I do not want to set CUDA_VISIBLE_DEVICES, since I want
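One commonly suggested addition (a sketch using the same TF1-style API as the question; whether it suits the asker's full setup is an assumption, since the question is truncated): `allow_growth` only limits how much memory is taken on each *visible* GPU, while `gpu_options.visible_device_list` controls which GPUs the session sees at all.

```python
import tensorflow as tf

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
config.gpu_options.allow_growth = True
# Expose only GPU 0 to this session; the other GPUs stay untouched,
# without setting CUDA_VISIBLE_DEVICES in the environment.
config.gpu_options.visible_device_list = '0'

with tf.Session(config=config) as sess:
    pass
```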

Getting different results from Keras model.evaluate and model.predict

Submitted by 安稳与你 on 2021-01-23 05:07:23

Question: I have trained a model to predict topic categories using word2vec and an LSTM model in Keras, and got about 98% accuracy during training. I saved the model, then loaded it in another file to try it on the test set. I used model.evaluate and model.predict, and the results were very different. I'm using Keras with TensorFlow as the backend. The model summary is:

```
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================
```
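One frequent cause of this mismatch (an assumption about the truncated question, not a diagnosis): `model.evaluate` reports accuracy directly, while `model.predict` returns per-class probabilities that must be reduced with argmax before comparing them to labels. A NumPy stand-in for the two computations:

```python
import numpy as np

probs = np.array([[0.8, 0.2],    # stand-in for model.predict(x_test)
                  [0.4, 0.6],
                  [0.9, 0.1]])
labels = np.array([0, 1, 1])

# Comparing raw probability vectors to integer labels is meaningless;
# reduce to predicted classes first.
pred_classes = np.argmax(probs, axis=1)
accuracy = float(np.mean(pred_classes == labels))
print(accuracy)  # 2/3 here -- this number is what should match model.evaluate
```

The other usual suspect is evaluating on a shuffled generator while comparing predictions against labels in their original order, which scrambles the pairing even when both computations are individually correct.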