conv-neural-network

Convolutional Neural Networks intuition - difference in outcome between a large kernel size vs a large number of filters

…衆ロ難τιáo~ Posted on 2021-02-11 12:46:12
Question: I wanted to understand the architectural intuition behind the difference between: tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)) and tf.keras.layers.Conv2D(32, (7,7), activation='relu', input_shape=(28, 28, 1)) Assuming that, as kernel size increases, more complex feature patterns can be matched in the convolution step, and that, as the number of filters increases, a larger variety of smaller features can define a particular layer. How and when (if possible kindly give scenarios) do
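A quick way to make the trade-off concrete is to count trainable parameters per layer. The helper below is a plain-Python sketch of the standard Conv2D parameter formula, filters × (kh × kw × in_channels + 1), applied to the two layers from the question on a 1-channel input:

```python
def conv2d_params(filters, kernel, in_channels):
    """Trainable parameters of a Conv2D layer: one kh x kw x in_channels
    kernel per filter, plus one bias per filter."""
    kh, kw = kernel
    return filters * (kh * kw * in_channels + 1)

# The two layers from the question, on a 1-channel 28x28 input:
print(conv2d_params(64, (3, 3), 1))  # 640  -- many small-receptive-field filters
print(conv2d_params(32, (7, 7), 1))  # 1600 -- fewer, larger-receptive-field filters
```

So the 7×7 layer spends more parameters per filter on a wider receptive field, while the 3×3 layer spends them on filter diversity.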

Can not squeeze dim[1], expected a dimension of 1, got 5

。_饼干妹妹 Posted on 2021-02-11 09:41:29
Question: I tried different solutions but am still facing the issue. I am new to ML/DL (Python). In which case do we face the error "Can not squeeze dim[1], expected a dimension of 1, got 5"? Please help me understand what I am doing wrong and what is correct. Here is the traceback: InvalidArgumentError Traceback (most recent call last) <ipython-input-9-0826122252c2> in <module>() 98 model.summary() 99 model.compile(loss='sparse_categorical
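This particular error typically means sparse_categorical_crossentropy received one-hot labels of shape (batch, 5) instead of integer class indices of shape (batch,). A minimal NumPy sketch of the mismatch and one fix (converting one-hot labels back to indices; alternatively, switch the loss to categorical_crossentropy and keep the one-hot labels):

```python
import numpy as np

# One-hot labels for 5 classes: shape (batch, 5).
y_onehot = np.eye(5)[[0, 2, 4]]

# sparse_categorical_crossentropy expects integer indices of shape (batch,);
# feeding it the one-hot matrix triggers "expected a dimension of 1, got 5".
y_sparse = np.argmax(y_onehot, axis=1)
print(y_sparse, y_sparse.shape)  # [0 2 4] (3,)
```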

Result changes every time I run Neural Network code

烈酒焚心 Posted on 2021-02-10 22:47:28
Question: I got the results by running the code provided in this link: Neural Network – Predicting Values of Multiple Variables. I was able to compute losses, accuracy, etc. However, every time I run this code, I get a new result. Is it possible to get the same (consistent) result? Answer 1: The code is full of random.randint() everywhere! Furthermore, the weights are usually randomly initialized as well, and the batch_size also has an influence (although a pretty minor one) on the result. Y_train, X_test, X_train
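The usual remedy is to seed every random number generator the code touches before training. A minimal sketch using the standard-library and NumPy RNGs (for Keras/TensorFlow you would additionally call tf.random.set_seed, and for PyTorch torch.manual_seed; note that some GPU kernels remain nondeterministic even when seeded):

```python
import random
import numpy as np

def seed_everything(seed):
    """Fix the Python and NumPy RNGs so repeated runs draw the same numbers."""
    random.seed(seed)
    np.random.seed(seed)

seed_everything(42)
a = np.random.randint(0, 100, size=5)
seed_everything(42)
b = np.random.randint(0, 100, size=5)
print((a == b).all())  # True -- identical draws on both "runs"
```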

Limiting probability percentage of irrelevant image in CNN

时间秒杀一切 Posted on 2021-02-10 14:51:40
Question: I am training a CNN model with five classes using the Keras library. Using the model.predict function I get the prediction percentages for the classes. My problem is that for an image which doesn't belong to any of these classes and is completely irrelevant, the model still predicts percentages across the five classes. How do I prevent this? How do I identify the image as irrelevant? Answer 1: I assume you are using a softmax activation on your last layer to generate the probabilities for each class. By definition, the sum
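One simple heuristic that follows from the answer: since softmax probabilities always sum to 1 over the known classes, you can reject any prediction whose top probability falls below a confidence threshold. A NumPy sketch (the 0.8 threshold and the class names are made up for illustration; this is a heuristic, not a reliable out-of-distribution detector):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # shift for numerical stability
    return e / e.sum()

def classify_with_rejection(logits, class_names, threshold=0.8):
    """Return the top class, or None when the network isn't confident enough."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    return class_names[best] if probs[best] >= threshold else None

classes = ["cat", "dog", "car", "tree", "boat"]
print(classify_with_rejection(np.array([8.0, 0.1, 0.1, 0.1, 0.1]), classes))  # cat
print(classify_with_rejection(np.array([1.0, 0.9, 1.1, 0.8, 1.0]), classes))  # None
```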

CNN loss with multiple outputs?

寵の児 Posted on 2021-02-10 14:33:08
Question: I have the following model: def get_model(): epochs = 100 learning_rate = 0.1 decay_rate = learning_rate / epochs inp = keras.Input(shape=(64, 101, 1), name="inputs") x = layers.Conv2D(128, kernel_size=(3, 3), strides=(3, 3), padding="same")(inp) x = layers.Conv2D(256, kernel_size=(3, 3), strides=(3, 3), padding="same")(x) x = layers.Flatten()(x) x = layers.Dense(150)(x) x = layers.Dense(150)(x) out1 = layers.Dense(40000, name="sf_vec")(x) out2 = layers.Dense(128, name="ls_weights")(x) model =
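With two output layers, Keras reduces everything to one scalar objective: each named output gets its own loss, and the total is their weighted sum (weights taken from loss_weights, defaulting to 1.0). A plain-Python sketch of that combination, reusing the question's output names sf_vec and ls_weights with made-up loss values:

```python
def total_loss(per_output_losses, loss_weights):
    """Combine named per-output losses into one scalar, the way Keras does
    when model.compile receives loss={...} and loss_weights={...}."""
    return sum(loss_weights[name] * value
               for name, value in per_output_losses.items())

losses = {"sf_vec": 0.50, "ls_weights": 0.20}   # hypothetical batch losses
weights = {"sf_vec": 1.0, "ls_weights": 0.5}    # relative importance
print(total_loss(losses, weights))  # 0.6
```

In real Keras code this corresponds to model.compile(loss={"sf_vec": ..., "ls_weights": ...}, loss_weights={"sf_vec": 1.0, "ls_weights": 0.5}).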

PyTorch model prediction fails for a single item

生来就可爱ヽ(ⅴ<●) Posted on 2021-02-08 08:52:31
Question: I use PyTorch and transfer learning to train a mobilenet_v2-based classifier. I use a batch of 20 images during training and my test accuracy is ~80%. When I try to use the model with a single image for an individual prediction, the output is a wrong class. At the same time, if I take a batch from my test dataset and insert my single image into it in place of element 0, prediction 0 will be the correct class. So the model works for a batch but not for an individual item. If I
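Two things usually cause this: the model was left in training mode (so BatchNorm layers use batch statistics, which are meaningless for a batch of one), and/or the single image was fed without a leading batch dimension. A NumPy sketch of the shape fix, with the corresponding PyTorch calls noted in comments:

```python
import numpy as np

# A single image must still be a batch of one: models expect N x C x H x W.
img = np.zeros((3, 224, 224), dtype=np.float32)  # C x H x W, no batch dim
batch = img[None]                                # 1 x C x H x W

print(img.shape, batch.shape)  # (3, 224, 224) (1, 3, 224, 224)

# In PyTorch the equivalent fix is:
#   model.eval()                       # freeze BatchNorm/Dropout behavior
#   with torch.no_grad():
#       out = model(x.unsqueeze(0))    # add the batch dimension
```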

How to design a shared weight, multi input/output Auto-Encoder network?

自古美人都是妖i Posted on 2021-02-08 07:47:09
Question: I have two different types of images (a camera image and its corresponding sketch). The goal of the network is to find the similarity between both images. The network consists of a single encoder and a single decoder. The motivation behind the single encoder-decoder is to share the weights between them. input_img = Input(shape=(img_width,img_height, channels)) def encoder(input_img): # Photo-Encoder Code pe = Conv2D(96, kernel_size=11, strides=(4,4), padding = 'SAME')(left_input) # (?, 64, 64,
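In the Keras functional API, weights are shared by creating a layer (or sub-model) once and calling the same object on every input; a Python function that builds fresh Conv2D layers on each call shares nothing. The tiny NumPy model below illustrates the principle with a hypothetical one-layer encoder:

```python
import numpy as np

class SharedEncoder:
    """Minimal stand-in for an encoder: one weight matrix, created once.
    Calling the same instance on both inputs reuses those weights."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, out_dim))

    def __call__(self, x):
        return np.maximum(x @ self.W, 0.0)  # ReLU

encoder = SharedEncoder(4, 2)          # built once...
photo, sketch = np.ones(4), np.ones(4)
z_photo, z_sketch = encoder(photo), encoder(sketch)  # ...applied twice
print(np.allclose(z_photo, z_sketch))  # True: same weights, same inputs
```

In Keras the same idea is encoder_model = Model(...); z1 = encoder_model(img1); z2 = encoder_model(img2).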

Is it possible to create multiple instances of the same CNN that take in multiple images and are concatenated into a dense layer? (keras)

岁酱吖の Posted on 2021-02-08 07:21:35
Question: Similar to this question, I'm looking to have several image input layers that go through one larger CNN (e.g. Xception minus the dense layers), and then have the outputs of the one CNN across all images concatenated into a dense layer. Is this possible with Keras, and is it even possible to train a network from the ground up with this architecture? I'm essentially looking to train a model that takes in a larger but fixed number of images per sample (i.e. 3+ image inputs with similar visual
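Conceptually this is the same shared-weight pattern as a Siamese network: one backbone applied to every input image, with the resulting feature vectors concatenated before the dense head. A NumPy sketch of the data flow (the shapes and the tanh "backbone" are made up for illustration; in Keras you would call one instantiated base model on each Input and merge with layers.Concatenate):

```python
import numpy as np

def shared_backbone(x, W):
    """Stand-in for one CNN backbone; the same weights W process every image."""
    return np.tanh(x @ W)

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 4))                       # one set of weights
images = [rng.standard_normal(8) for _ in range(3)]   # 3 images per sample

features = [shared_backbone(img, W) for img in images]  # shared weights each time
dense_input = np.concatenate(features)                  # fed to the dense layer
print(dense_input.shape)  # (12,)
```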