convolutional-neural-network

Difference between Dense(2) and Dense(1) as the final layer of a binary classification CNN?

Submitted by 大城市里の小女人 on 2020-08-01 03:51:49
Question: In a CNN for binary classification of images, should the output shape be (number of images, 1) or (number of images, 2)? Specifically, here are two kinds of final layer: keras.layers.Dense(2, activation = 'softmax')(previousLayer) or keras.layers.Dense(1, activation = 'softmax')(previousLayer). In the first case, every image has two output values (the probability of belonging to group 1 and the probability of belonging to group 2). In the second case, each image has only 1 output…
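A point worth noting here: a two-logit softmax head and a single-logit sigmoid head are mathematically equivalent for binary classification, while Dense(1, activation='softmax') is degenerate (softmax over a single value is always 1). The sketch below verifies both facts in plain Python; the logit values are arbitrary illustrations, not from the question.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    m = max(zs)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

# A two-logit softmax reduces to a sigmoid of the logit difference:
z1, z2 = 0.7, -1.3
p_softmax = softmax([z1, z2])[0]   # P(class 1) from a Dense(2) softmax head
p_sigmoid = sigmoid(z1 - z2)       # same probability from a single logit
assert abs(p_softmax - p_sigmoid) < 1e-12

# A Dense(1) head with softmax is degenerate: its output is always 1.
assert softmax([z1]) == [1.0]
```

So the practical choices are Dense(2) with softmax (plus categorical cross-entropy) or Dense(1) with sigmoid (plus binary cross-entropy); Dense(1) with softmax cannot learn anything.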

why not use the max value of output tensor instead of Softmax Function?

Submitted by 旧时模样 on 2020-01-15 11:06:36
Question: I built a CNN model for one-class classification of images. The output tensor is a list with 65 elements. I feed this tensor into the softmax function and get the classification result. I think the maximum value in this output tensor already identifies the predicted class, so why not use it directly for the classification task? Is it just that the softmax function can be differentiated easily? Answer 1: Softmax is used for multi-class classification. In multi-class classification the model is expected to classify the input…
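The key point behind this answer can be shown directly: softmax is monotonic, so it predicts the same class as a plain argmax over the logits, but unlike argmax it produces a proper probability distribution with smooth derivatives, which is what cross-entropy training needs. A minimal illustration (the logit values are made up for the example):

```python
import math

def softmax(zs):
    m = max(zs)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

logits = [1.2, -0.4, 3.1, 0.0]
probs = softmax(logits)

# Softmax is monotonic, so at inference time the predicted class
# is identical to a plain argmax over the raw logits...
assert probs.index(max(probs)) == logits.index(max(logits))

# ...but it also yields a valid probability distribution,
assert abs(sum(probs) - 1.0) < 1e-12

# whose derivative (e.g. d p_i / d z_i = p_i * (1 - p_i)) is smooth.
# argmax/max has no useful gradient, so it cannot drive training.
```

In other words: at test time taking the max is fine; during training you need softmax (with cross-entropy) precisely because it is differentiable.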

How to Feed Batched Sequences of Images through Tensorflow conv2d

Submitted by 纵然是瞬间 on 2020-01-14 10:43:49
Question: This seems like a trivial question, but I've been unable to find the answer. I have batched sequences of images of shape [batch_size, number_of_frames, frame_height, frame_width, number_of_channels], and I would like to pass each frame through a few convolutional and pooling layers. However, TensorFlow's conv2d layer accepts 4D inputs of shape [batch_size, frame_height, frame_width, number_of_channels]. My first attempt was to use tf.map_fn over axis=1, but I discovered that this function…
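The usual answer is to fold the time axis into the batch axis with a reshape ([B, T, H, W, C] → [B*T, H, W, C]), run conv2d, then reshape back (keras.layers.TimeDistributed does the same thing under the hood). The sketch below shows only the index bookkeeping that makes this safe, in plain Python; the dimension values are illustrative:

```python
def merge_batch_time(b, t, T):
    # After reshaping [B, T, H, W, C] -> [B*T, H, W, C],
    # frame (b, t) lands at row b*T + t of the merged batch.
    return b * T + t

def split_batch_time(i, T):
    # Reshaping back [B*T, ...] -> [B, T, ...] recovers (b, t).
    return divmod(i, T)

B, T = 4, 16  # hypothetical batch size and frames per sequence
for b in range(B):
    for t in range(T):
        assert split_batch_time(merge_batch_time(b, t, T), T) == (b, t)
```

Because the merge/split round-trips exactly, each frame goes through the shared conv weights independently, and the sequence structure is restored afterwards for any recurrent layers that follow.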

Image preprocessing in convolutional neural network yields lower accuracy in Keras vs Tflearn

Submitted by 不羁的心 on 2019-12-24 18:19:48
Question: I'm trying to convert this tflearn DCNN sample (using image preprocessing and augmentation) to Keras. The tflearn sample begins:
import tflearn
from tflearn.data_utils import shuffle, to_categorical
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
# Data loading and…
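A frequent cause of accuracy gaps in such ports is the normalization step: tflearn's ImagePreprocessing computes featurewise zero-center/stdnorm statistics on the training set and reuses them at test time, and a Keras port must do the same (e.g. via ImageDataGenerator's featurewise options plus fit()). The sketch below shows, in plain Python on a toy dataset, the statistic that has to be shared between training and evaluation:

```python
def featurewise_standardize(images, mean=None, std=None):
    """Zero-center and scale pixel values. When mean/std are None they are
    computed from the data (training); at test time the training-set
    statistics must be passed back in, or results will not match."""
    flat = [p for img in images for p in img]
    if mean is None:
        mean = sum(flat) / len(flat)
    if std is None:
        var = sum((p - mean) ** 2 for p in flat) / len(flat)
        std = var ** 0.5
    return [[(p - mean) / std for p in img] for img in images], mean, std

# Toy "training set" of two flattened images:
train = [[0.0, 0.2, 0.4], [0.6, 0.8, 1.0]]
normed, mean, std = featurewise_standardize(train)
flat = [p for img in normed for p in img]
assert abs(sum(flat)) < 1e-9          # zero mean after centering
assert abs(mean - 0.5) < 1e-9         # statistic to reuse at test time
```

If the Keras pipeline recomputes these statistics per batch, or skips them entirely, the two frameworks are effectively training on differently scaled inputs, which plausibly explains the accuracy difference.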

Re-train model with new classes

Submitted by 故事扮演 on 2019-12-24 01:19:30
Question: I have built an image classifier with 2 classes, say 'A' and 'B'. I have also saved this model using model.save(). Now, after a certain time, the requirement arose to add one more class, 'C'. Is it possible to load_model() and then add only one class to the previously saved model, so that the final model has 3 classes ('A', 'B' and 'C'), without having to retrain the whole model for classes 'A' and 'B' again? Can anyone help? I have tried this: I used vgg16 as a base model and popped out…
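The common recipe is: load the saved model, drop the final Dense(2) head, attach a Dense(3) head (optionally freezing the base), and warm-start the new head by copying the learned columns for 'A' and 'B' and randomly initialising only the column for 'C'. The weight surgery itself is simple; a plain-Python sketch (the toy 3-feature weight matrix is made up for illustration):

```python
import random

def expand_output_layer(weights, biases, n_new, init_scale=0.01, seed=0):
    """Grow a (features x classes) output weight matrix by n_new columns,
    keeping the learned columns for the old classes and randomly
    initialising only the new ones (warm start)."""
    rng = random.Random(seed)
    new_w = [row + [rng.uniform(-init_scale, init_scale) for _ in range(n_new)]
             for row in weights]
    new_b = biases + [0.0] * n_new
    return new_w, new_b

# Old head: 3 features -> 2 classes ('A', 'B'):
w = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]
b = [0.0, 0.1]
w3, b3 = expand_output_layer(w, b, n_new=1)   # add class 'C'
assert len(w3[0]) == 3 and len(b3) == 3
assert [row[:2] for row in w3] == w           # 'A'/'B' weights untouched
```

Note that even with this warm start, some fine-tuning on data that includes all three classes is still needed, since softmax probabilities are coupled across classes.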

Perform multi-scale training (yolov2)

Submitted by 末鹿安然 on 2019-12-21 06:38:11
Question: I am wondering how the multi-scale training in YOLOv2 works. The paper states: "The original YOLO uses an input resolution of 448 × 448. With the addition of anchor boxes we changed the resolution to 416 × 416. However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. Instead of fixing the input image size we change the network every few…"
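Concretely, the YOLOv2 paper describes picking a new input resolution every 10 batches, drawn from the multiples of 32 between 320 and 608 (32 being the network's total downsampling stride). A minimal sketch of that schedule, assuming a simple uniform choice per interval:

```python
import random

def multiscale_sizes(n_batches, interval=10, low=320, high=608, seed=0):
    """Yield one input resolution per batch: every `interval` batches,
    draw a new size uniformly from the multiples of 32 in [low, high],
    as described for YOLOv2's multi-scale training."""
    rng = random.Random(seed)
    choices = list(range(low, high + 1, 32))   # 320, 352, ..., 608
    size = 416                                  # default resolution
    for i in range(n_batches):
        if i % interval == 0:
            size = rng.choice(choices)          # resize the network on the fly
        yield size

sizes = list(multiscale_sizes(100))
assert all(s % 32 == 0 and 320 <= s <= 608 for s in sizes)
# The resolution only changes on interval boundaries:
assert all(sizes[i] == sizes[i - 1] for i in range(1, 100) if i % 10 != 0)
```

Because the network is fully convolutional, no architectural change is needed when the resolution changes; only the spatial dimensions of the feature maps (and the output grid) scale with the input.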

Conv2D transpose output shape using formula

Submitted by ≡放荡痞女 on 2019-12-18 07:19:30
Question: I get [-1, 256, 256, 3] as the output shape using the transpose layers shown below, and I print the output shape. My question is specifically about the height and width, which are both 256. The channels seem to be the number of filters of the last transpose layer in my code. I assumed, rather simplistically, that the formula is H = (H1 - 1)*stride + HF - 2*padding (I read this in other threads). But when I calculate it I don't seem to get that output; I think I may be missing the padding calculation. How…
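The formula is correct; the missing piece is usually the implicit padding. For Keras Conv2DTranspose with padding='same', the output simplifies to H = H1 * stride, which corresponds to plugging padding = (HF - stride) / 2 into the general formula. A worked sketch (the 128→256 numbers assume stride 2 and a 4×4 kernel, which are not given in the question):

```python
def deconv_out(h_in, kernel, stride, padding):
    """General transposed-convolution output length:
    H = (H_in - 1) * stride + kernel - 2 * padding."""
    return (h_in - 1) * stride + kernel - 2 * padding

def deconv_out_same(h_in, stride):
    # Keras Conv2DTranspose with padding='same' simplifies to
    # H = H_in * stride, i.e. the general formula evaluated with
    # the implicit padding = (kernel - stride) / 2.
    return h_in * stride

# e.g. a 128-wide feature map with a 4x4 kernel, stride 2, 'same' padding:
assert deconv_out_same(128, 2) == 256
assert deconv_out(128, kernel=4, stride=2, padding=1) == 256  # (4-2)/2 = 1
```

So if the layers use padding='same', just multiplying the input size by the stride at each transpose layer reproduces the printed 256 × 256, with the channel count set by the last layer's filters.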

Specify some untrainable filters for Keras convolutional network

Submitted by 这一生的挚爱 on 2019-12-11 07:35:40
Question: I would like to develop a convolutional network architecture where, in the first layer (Conv1D in this case), I prespecify some portion of untrainable, fixed filters, while also having several trainable filters that the model can learn. Is this possible, and how would it be done? My intuition is that I can make two separate Conv1D layers (one trainable and one untrainable) and then somehow concatenate them, but I'm not sure what this would look like in code. Also, for the…
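The asker's intuition matches the standard pattern: build two parallel Conv1D layers on the same input (one created with trainable=False and its kernel set via set_weights), then join their outputs with keras.layers.Concatenate along the channel axis. The sketch below demonstrates the idea in plain Python with a toy 1D convolution; the specific filter values and signal are made up for illustration:

```python
def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation of a signal with one kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# One bank of frozen, hand-designed filters...
fixed_filters = [[1.0, -1.0], [0.5, 0.5]]   # edge detector, smoother
# ...and one bank of filters the optimiser would update (random init here):
trainable_filters = [[0.1, 0.2]]

signal = [0.0, 1.0, 1.0, 0.0, 2.0]
# Run both banks and concatenate along the channel axis, which is what
# Concatenate would do with the outputs of two parallel Conv1D layers:
outputs = [conv1d(signal, f) for f in fixed_filters + trainable_filters]
assert len(outputs) == 3                      # 2 fixed + 1 trainable channel
assert outputs[0] == [-1.0, 0.0, 1.0, -2.0]   # edge-detector response
```

During training, gradients would flow only into the trainable bank; the fixed bank keeps its prespecified kernels, while downstream layers still see all channels as one combined feature map.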