convolutional-neural-network

Reflection padding Conv2D

心已入冬 submitted on 2019-12-10 15:44:11
Question: I'm using Keras to build a convolutional neural network for image segmentation and I want to use "reflection padding" instead of padding='same', but I cannot find a way to do it in Keras. inputs = Input((num_channels, img_rows, img_cols)) conv1 = Conv2D(32, 3, padding='same', kernel_initializer='he_uniform', data_format='channels_first')(inputs) Is there a way to implement a reflection padding layer and insert it in a Keras model? Answer 1: The accepted answer above is not working in the current Keras
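One common approach (not necessarily what the truncated answer goes on to describe) is a small custom layer that wraps tf.pad with mode='REFLECT'. The sketch below assumes tf.keras and the channels_first layout from the question; the layer name, input sizes and padding amounts are illustrative.

    import tensorflow as tf
    from tensorflow.keras.layers import Layer, Input, Conv2D

    class ReflectionPadding2D(Layer):
        """Reflection-pads the two spatial dimensions (channels_first layout assumed)."""
        def __init__(self, padding=(1, 1), **kwargs):
            super().__init__(**kwargs)
            self.padding = padding

        def call(self, x):
            h, w = self.padding
            # batch and channel dimensions are left untouched
            return tf.pad(x, [[0, 0], [0, 0], [h, h], [w, w]], mode='REFLECT')

    inputs = Input((3, 128, 128))  # (num_channels, img_rows, img_cols) -- placeholder sizes
    x = ReflectionPadding2D((1, 1))(inputs)
    conv1 = Conv2D(32, 3, padding='valid', kernel_initializer='he_uniform',
                   data_format='channels_first')(x)

With the explicit reflection padding in front, the convolution itself uses padding='valid', so for a 3x3 kernel the spatial size comes out the same as padding='same' would give.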

Unet segmentation model predicts blank image? [duplicate]

大憨熊 submitted on 2019-12-08 14:09:07
Question: This question already has an answer here: U-net low contrast test images, predict output is grey box (1 answer). Closed 10 days ago. I'm using the U-Net architecture for lung segmentation. It shows good training and validation loss, but when I call the predict function and give it one image from the training set as input, it gives me a blank image as output. I don't understand why it does this when it shows good validation accuracy. I'm using Keras. Answer 1: Accuracy is not a good metric for segmentation, especially for
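The truncated answer points at the metric/loss choice. A common remedy (an assumption here, not necessarily what the original answer goes on to recommend) is to train and monitor with a Dice coefficient instead of plain accuracy; the function names below are illustrative.

    import tensorflow.keras.backend as K

    def dice_coef(y_true, y_pred, smooth=1.0):
        # overlap between prediction and mask; far more informative than pixel accuracy
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

    def dice_loss(y_true, y_pred):
        return 1.0 - dice_coef(y_true, y_pred)

    # model.compile(optimizer='adam', loss=dice_loss, metrics=[dice_coef])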

merge different models with different inputs Keras

天涯浪子 submitted on 2019-12-08 13:15:04
Question: I would like to train two different Conv models in Keras with different input dimensions. I have: input_size=4 input_sizeB=6 model=Sequential() model.add(Conv2D(filters=10, input_shape=(1,time_steps,input_size), kernel_size=(24,3), activation='relu', data_format='channels_first', kernel_regularizer=regularizers.l2(0.001))) model.add(Flatten()) A = model.add(Dense(25, activation='tanh', kernel_regularizer=regularizers.l2(0.003))) model2=Sequential() model2.add(Conv2D(filters=10, input_shape=(1,time
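A minimal functional-API sketch of how two branches with different input widths can be merged into one trainable model, assuming that is the goal; time_steps, the (24, 3) kernel and the single-unit output head are placeholders.

    from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, concatenate
    from tensorflow.keras.models import Model

    time_steps = 100                            # placeholder value

    in_a = Input(shape=(1, time_steps, 4))      # input_size = 4
    in_b = Input(shape=(1, time_steps, 6))      # input_sizeB = 6

    def branch(x):
        # one conv branch per input, mirroring the Sequential models in the question
        x = Conv2D(10, (24, 3), activation='relu', data_format='channels_first')(x)
        x = Flatten()(x)
        return Dense(25, activation='tanh')(x)

    merged = concatenate([branch(in_a), branch(in_b)])
    out = Dense(1)(merged)                      # assumed task: a single regression output

    model = Model(inputs=[in_a, in_b], outputs=out)
    model.compile(optimizer='adam', loss='mse')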

how to save, restore, make predictions with siamese network (with triplet loss)

為{幸葍}努か submitted on 2019-12-08 09:39:37
Question: I am trying to develop a Siamese network for simple face verification (and recognition in the second stage). I have a network in place that I managed to train, but I am a bit puzzled about how to save and restore the model and how to make predictions with the trained model. Hoping that someone experienced in the domain can help me make progress. Here is how I create my Siamese network, to begin with... model = ResNet50(weights='imagenet') # get the original ResNet50 model model
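A sketch of the save/restore side, assuming a Keras model compiled with a custom triplet loss whose anchor/positive/negative embeddings are concatenated along the last axis; the embedding size, margin and file name are all illustrative.

    import tensorflow.keras.backend as K
    from tensorflow.keras.models import load_model

    EMB = 128                                   # assumed embedding size

    def triplet_loss(y_true, y_pred, margin=0.2):
        # y_pred is assumed to be [anchor | positive | negative], each EMB wide
        anchor   = y_pred[:, :EMB]
        positive = y_pred[:, EMB:2 * EMB]
        negative = y_pred[:, 2 * EMB:]
        pos_dist = K.sum(K.square(anchor - positive), axis=-1)
        neg_dist = K.sum(K.square(anchor - negative), axis=-1)
        return K.maximum(pos_dist - neg_dist + margin, 0.0)

    # model.save('siamese.h5')                  # after training
    # Keras must be told about the custom loss when restoring:
    # model = load_model('siamese.h5', custom_objects={'triplet_loss': triplet_loss})

    # For verification, predict embeddings for two faces with the embedding branch
    # and threshold the squared distance between them.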

Transfer learning why remove last hidden layer?

风流意气都作罢 submitted on 2019-12-02 15:23:39
Question: Often when reading blogs about transfer learning it says: remove the last layer, or remove the last two layers. That is, remove the output layer and the last hidden layer. So if the transfer learning also implies changing the cost function, e.g. from cross-entropy to mean squared error, I understand that you need to change the output layer from a 1001-way softmax to a Dense(1) layer which outputs a float, but: why also change the last hidden layer? What weights do the two new last layers get initialized with if using Keras and one of the predefined CNN models with imagenet weights?
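A minimal tf.keras sketch of that setup, assuming a ResNet50 backbone and a regression head; the input size and the 256-unit hidden layer are illustrative.

    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
    from tensorflow.keras.models import Model

    # include_top=False drops the ImageNet-specific classification head; the
    # convolutional backbone keeps its pretrained ImageNet weights.
    base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                      # optionally freeze the pretrained part

    x = GlobalAveragePooling2D()(base.output)
    x = Dense(256, activation='relu')(x)        # new hidden layer: freshly initialized
    out = Dense(1)(x)                           # regression head, e.g. for an MSE loss

    model = Model(base.input, out)
    model.compile(optimizer='adam', loss='mse')

The new Dense layers are not taken from the ImageNet checkpoint at all; they start from Keras's default initializers (glorot_uniform kernels, zero biases) unless you pass a different kernel_initializer.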

Conv2D transpose output shape using formula

笑着哭i submitted on 2019-11-29 12:49:34
I get [-1,256,256,3] as the output shape using the transpose layers shown below (I print the output shape). My question is specifically about the height and width, which are both 256. The channels seem to be the number of filters of the last transpose layer in my code. I assumed, rather simplistically, from other threads that the formula is H = (H1 - 1)*stride + HF - 2*padding, but when I calculate with it I don't get that output. I think I may be missing the padding calculation. How much padding is added by 'SAME'? My code is this. linear = tf.layers.dense(z, 512 * 8 * 8) linear = tf
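That formula corresponds to the 'VALID' case. With 'SAME' padding, TensorFlow chooses the padding so that a transposed convolution simply multiplies the spatial size by the stride (H_out = H1 * stride), so five stride-2 layers take an 8x8 map to 256x256. A small tf.keras sketch of that shape arithmetic, with an assumed kernel size and filter count:

    import tensorflow as tf

    # 'VALID':  H_out = (H_in - 1) * stride + kernel_size
    # 'SAME':   H_out = H_in * stride   (TensorFlow picks the padding for you)
    x = tf.zeros([1, 8, 8, 512])              # e.g. the dense output reshaped to 8x8x512
    for _ in range(5):                        # 8 -> 16 -> 32 -> 64 -> 128 -> 256
        x = tf.keras.layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding='same')(x)
    print(x.shape)                            # (1, 256, 256, 64)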

How to calculate the total number of parameters in a convolutional neural network?

瘦欲@ submitted on 2019-11-29 12:36:35
How do I calculate the total number of parameters in a CNN? Here is the code: input_shape = (32, 32, 1) flat_input_size = input_shape[0]*input_shape[1]*input_shape[2] num_classes = 4 cnn_model = Sequential() cnn_model.add(Conv2D(32, (3, 3), padding='same', input_shape=input_shape)) cnn_model.add(Activation('relu')) cnn_model.add(MaxPooling2D(pool_size=(2, 2))) cnn_model.add(Conv2D(64, (3, 3))) cnn_model.add(Activation('relu')) cnn_model.add(MaxPooling2D(pool_size=(2, 2))) cnn_model.add(Dropout(0.25)) cnn_model.add(Conv2D(128, (3, 3), padding='same')) cnn_model.add(Activation('relu')) cnn
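For the layers visible in the excerpt, each Conv2D contributes (kernel_h * kernel_w * in_channels + 1) * filters parameters, while Activation, MaxPooling2D and Dropout add none. A small sketch of that arithmetic (the figures can be cross-checked against cnn_model.summary()):

    def conv2d_params(kernel_h, kernel_w, in_channels, filters):
        # kernel weights plus one bias per output filter
        return (kernel_h * kernel_w * in_channels + 1) * filters

    print(conv2d_params(3, 3, 1, 32))    # Conv2D(32, (3, 3)) on a (32, 32, 1) input ->    320
    print(conv2d_params(3, 3, 32, 64))   # Conv2D(64, (3, 3))                        -> 18,496
    print(conv2d_params(3, 3, 64, 128))  # Conv2D(128, (3, 3))                       -> 73,856
    # Any Dense layer after Flatten adds (flattened_size + 1) * units on top of these.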
