conv-neural-network

GAN generates exactly the same images across a batch only because of seed distribution. Why?

≡放荡痞女 submitted on 2021-01-07 00:07:50
Question: I have trained a GAN to reproduce CIFAR10-like images. Initially I noticed that all images across one batch produced by the generator always look the same, like the picture below: After hours of debugging and comparison with the tutorial, which is a great learning resource for beginners (https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-a-cifar-10-small-object-photographs-from-scratch/), I added only one letter to my original code and the generated images start…
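The snippet is truncated before naming the one-letter fix, but a common cause of identical images across a batch is drawing one latent vector and reusing it for every sample instead of sampling an independent vector per sample. A minimal NumPy sketch of the two behaviours (shapes are hypothetical, not the asker's actual code):

```python
import numpy as np

latent_dim, n_samples = 100, 16

# Buggy variant: one latent vector drawn once and repeated for every
# sample, so the generator maps every row to the identical image.
z = np.random.randn(latent_dim)
batch_same = np.tile(z, (n_samples, 1))

# Fixed variant: an independent latent vector per sample.
batch_varied = np.random.randn(n_samples, latent_dim)

assert np.allclose(batch_same, batch_same[0])  # every row is identical
```

Feeding `batch_varied` to the generator produces a different image per row, because each row is a distinct point in the latent space.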

Understanding weird YOLO convolutional layer output size

和自甴很熟 submitted on 2021-01-05 09:15:47
Question: I am trying to understand how Darknet works, and I was looking at the yolov3-tiny configuration file, specifically layer number 13 (line 107). [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky The size of the kernel is 1x1, the stride is 1, and the padding is 1 too. When I load the network using Darknet, it indicates that the output width and height are the same as the input: 13 conv 256 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 256 However, shouldn't the width…
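The apparent contradiction comes from plugging pad=1 into the standard output-size formula literally. A small sketch of that formula, together with the convention (an assumption about Darknet's cfg parser, stated here for illustration) that pad=1 is a flag meaning "pad by size // 2" rather than a literal pixel count:

```python
def conv_out(in_size, kernel, stride, padding):
    # Standard convolution output-size formula.
    return (in_size + 2 * padding - kernel) // stride + 1

# A literal padding of 1 pixel would enlarge the 13x13 input:
conv_out(13, 1, 1, 1)       # gives 15, not what Darknet reports

# If pad=1 in the cfg instead means padding = size // 2, a 1x1 kernel
# gets zero padding and the spatial size is preserved:
conv_out(13, 1, 1, 1 // 2)  # gives 13, matching the printed layer summary
```

Under that reading, pad=1 on a 3x3 kernel yields padding 1 and again a "same"-sized output, which is consistent with the other layers in the file.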

My deep learning model is not training. How do I make it train?

∥☆過路亽.° submitted on 2021-01-05 09:10:45
Question: I'm fairly new to Keras, so please excuse me if I've made a fundamental error. My model has 3 convolutional (2D) layers and 4 dense layers, interspersed with dropout layers. I am trying to train a regression model using images. X_train.shape = (5164, 160, 320, 3) y_train.shape = (5164) from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Flatten, Conv2D, Activation, MaxPooling2D, Dropout import tensorflow.compat.v1 as tf tf.disable_v2_behavior() from tensorflow…

What do the scale and step values in YOLO's .cfg file mean?

别等时光非礼了梦想. submitted on 2021-01-05 07:01:45
Question: I am trying to understand the .cfg file of YOLOv2. I didn't understand steps=-1,100,80000,100000 scales=.1,10,.1,.1 Can someone explain this to me? Answer 1: steps is a list of checkpoints (iteration counts) at which the scales are applied. scales is a list of coefficients by which the learning_rate is multiplied at those checkpoints. Together they determine how the learning_rate changes as the number of training iterations grows. The two lists correspond to each other and must have the same length. steps…
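The schedule described in the answer can be sketched as a small function (the base learning rate of 0.001 is assumed here for illustration; it comes from the learning_rate field of the cfg, and Darknet's exact implementation may differ in details):

```python
def yolo_lr(iteration, base_lr, steps, scales):
    # Each time training passes a checkpoint in `steps`, the current
    # learning rate is multiplied by the matching entry in `scales`.
    lr = base_lr
    for step, scale in zip(steps, scales):
        if iteration >= step:
            lr *= scale
    return lr

steps = [-1, 100, 80000, 100000]
scales = [0.1, 10, 0.1, 0.1]
base_lr = 0.001  # assumed value, for illustration only

yolo_lr(0, base_lr, steps, scales)       # ~0.0001 (the -1 step fires immediately)
yolo_lr(500, base_lr, steps, scales)     # ~0.001  (x0.1 then x10: a warm-up)
yolo_lr(120000, base_lr, steps, scales)  # ~1e-05  (two further x0.1 decays)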

What are DepthwiseConv2D and SeparableConv2D? How do they differ from a normal Conv2D layer in Keras?

左心房为你撑大大i submitted on 2021-01-04 03:12:48
Question: I was looking through the architecture of EfficientNetB0 and noticed the DepthwiseConv2D operation. Some digging revealed that there is also a SeparableConv2D. What exactly are these operations? Source: https://stackoverflow.com/questions/61967172/what-is-depthwiseconv2d-and-separableconv2d-how-is-it-different-from-normal-con
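One way to see the difference is through parameter counts: a depthwise convolution applies one filter per input channel without mixing channels, and a separable convolution follows it with a 1x1 "pointwise" convolution that does the mixing. A back-of-the-envelope sketch (bias terms ignored):

```python
def conv2d_params(k, cin, cout):
    # Standard Conv2D: one k x k x cin kernel per output channel.
    return k * k * cin * cout

def depthwise_params(k, cin):
    # DepthwiseConv2D: one k x k filter per input channel; channels
    # are filtered independently and never mixed.
    return k * k * cin

def separable_params(k, cin, cout):
    # SeparableConv2D: a depthwise pass, then a 1x1 pointwise Conv2D
    # that combines the per-channel results into cout channels.
    return depthwise_params(k, cin) + conv2d_params(1, cin, cout)

# e.g. 3x3 kernels, 32 input channels -> 64 output channels:
conv2d_params(3, 32, 64)     # 18432
separable_params(3, 32, 64)  # 288 + 2048 = 2336
```

The large gap in parameter (and multiply-accumulate) counts is why depthwise-separable convolutions appear in efficiency-oriented architectures such as the EfficientNet family.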

Keras giving the same loss on every epoch

泄露秘密 submitted on 2021-01-02 06:04:07
Question: I am a newbie to Keras. I ran it on a dataset where my objective was to reduce the log loss, but every epoch gives me the same loss value, and I am confused about whether I am on the right track. For example: Epoch 1/5 91456/91456 [==============================] - 142s - loss: 3.8019 - val_loss: 3.8278 Epoch 2/5 91456/91456 [==============================] - 139s - loss: 3.8019 - val_loss: 3.8278 Epoch 3/5 91456/91456 [==============================] - 143s - loss: 3.8019 - val_loss: 3.8278…

Convolutional layer in Python using Numpy

爱⌒轻易说出口 submitted on 2021-01-01 13:33:08
Question: I am trying to implement a convolutional layer in Python using NumPy. The input is a 4-dimensional array of shape [N, H, W, C], where N is the batch size, H the height of the image, W the width of the image, and C the number of channels. The convolutional filter is also a 4-dimensional array, of shape [F, F, Cin, Cout], where F is the height and width of a square filter, Cin the number of input channels (Cin = C), and Cout the number of output channels. Assuming a stride of one along all axes and no padding, the output should be…
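A naive sketch of the layer described above, under the stated assumptions (stride 1, no padding; loops over output positions, so it is readable rather than fast):

```python
import numpy as np

def conv2d(x, w):
    """Naive NHWC convolution, stride 1, no padding.

    x: [N, H, W, C], w: [F, F, Cin, Cout] with Cin == C.
    Returns an array of shape [N, H - F + 1, W - F + 1, Cout].
    """
    n, h, wd, c = x.shape
    f, _, cin, cout = w.shape
    assert cin == c, "filter input channels must match the input"
    out = np.zeros((n, h - f + 1, wd - f + 1, cout))
    for i in range(h - f + 1):
        for j in range(wd - f + 1):
            patch = x[:, i:i + f, j:j + f, :]  # [N, F, F, C]
            # Contract the F, F, C axes of the patch against the
            # F, F, Cin axes of the filter, leaving [N, Cout].
            out[:, i, j, :] = np.tensordot(patch, w, axes=([1, 2, 3], [0, 1, 2]))
    return out
```

For example, an all-ones input of shape [2, 4, 4, 3] convolved with an all-ones [2, 2, 3, 5] filter gives a [2, 3, 3, 5] output where every entry equals 2 * 2 * 3 = 12, the number of terms each output sums over.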
