conv-neural-network

CNN architecture: classifying “good” and “bad” images

时光总嘲笑我的痴心妄想 submitted on 2021-01-28 05:11:58
Question: I'm researching the possibility of implementing a CNN to classify images as "good" or "bad", but am having no luck with my current architecture. Characteristics that denote a "bad" image: overexposure, oversaturation, incorrect white balance, blurriness. Would it be feasible to implement a neural network to classify images based on these characteristics, or is it best left to a traditional algorithm that simply looks at the variance in brightness/contrast throughout an image and classifies
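
The "traditional algorithm" the question mentions can be sketched as a simple heuristic. The function, thresholds, and toy 0-255 grayscale images below are all hypothetical illustrations, not the asker's code: an image is flagged "bad" when a large fraction of pixels is clipped near white (overexposure) or the brightness variance is very low (flat contrast).

```python
# Hypothetical baseline: flag an image as "bad" on overexposure or low contrast.
# Pixels are 0-255 grayscale values in a list of rows.

def is_bad_image(pixels, overexposed_thresh=250, overexposed_frac=0.5, min_variance=100.0):
    flat = [p for row in pixels for p in row]
    n = len(flat)
    mean = sum(flat) / n
    variance = sum((p - mean) ** 2 for p in flat) / n
    frac_bright = sum(1 for p in flat if p >= overexposed_thresh) / n
    return frac_bright >= overexposed_frac or variance < min_variance

overexposed = [[255, 255], [255, 10]]   # 3 of 4 pixels clipped near white
normal = [[30, 200], [90, 160]]         # wide brightness spread
print(is_bad_image(overexposed), is_bad_image(normal))
```

A CNN could in principle learn such cues from labeled examples, but a hand-written statistic like this is a reasonable baseline to beat first.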

understanding conv net layers

梦想与她 submitted on 2021-01-28 02:40:21
Question: I've been reading about conv nets and have programmed a few models myself. In visual diagrams of other models, each layer is shown smaller and deeper than the last. Layers have three dimensions, such as 256x256x32. What is this third number? I assume the first two numbers are the numbers of nodes, but I don't know what the depth is. Answer 1: TL;DR: 256x256x32 refers to the layer's output shape rather than the layer itself. There are many articles and posts out there explaining how

Using weights initializer with tf.nn.conv2d

两盒软妹~` submitted on 2021-01-27 16:43:33
Question: When using tf.layers.conv2d, setting the initializer is easy; it can be done through its parameter. But what if I use tf.nn.conv2d? I use the code below. Is this equivalent to setting the kernel_initializer parameter in tf.layers.conv2d? Although the program runs without errors, I don't know how to verify whether it does what it is expected to do. with tf.name_scope('conv1_2') as scope: kernel = tf.get_variable(initializer=tf.contrib.layers.xavier_initializer(), shape=[3, 3, 32, 32], name='weights
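
One way to verify the initializer is to check the numbers it produces against what Glorot/Xavier uniform initialization is defined to do: draw from a uniform distribution on [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)). The pure-Python sketch below is an illustration of that formula (the fan computation follows the usual convention for a [k_h, k_w, in_ch, out_ch] conv kernel); it is not TensorFlow's implementation:

```python
import math
import random

# Glorot/Xavier uniform: samples in [-limit, limit],
# limit = sqrt(6 / (fan_in + fan_out)), with fans scaled by the receptive field.
def glorot_uniform(shape, seed=0):
    k_h, k_w, in_ch, out_ch = shape
    fan_in = k_h * k_w * in_ch
    fan_out = k_h * k_w * out_ch
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    rng = random.Random(seed)
    values = [rng.uniform(-limit, limit) for _ in range(k_h * k_w * in_ch * out_ch)]
    return values, limit

weights, limit = glorot_uniform([3, 3, 32, 32])
print(round(limit, 4), all(-limit <= w <= limit for w in weights))
```

Inspecting the actual variable after `sess.run(tf.global_variables_initializer())` and checking that its values stay within this limit (and look roughly uniform) is a practical sanity check.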

Does tf.keras.layers.Conv1D support RaggedTensor input?

ⅰ亾dé卋堺 submitted on 2021-01-27 13:04:47
Question: The TensorFlow Conv1D layer documentation says: 'When using this layer as the first layer in a model, provide an input_shape argument (tuple of integers or None, e.g. (10, 128) for sequences of 10 vectors of 128-dimensional vectors, or (None, 128) for variable-length sequences of 128-dimensional vectors.' So I understand that we can input variable-length sequences, but when I use a ragged tensor as input to a Conv1D layer, it gives me an error: ValueError: Layer conv1d does not support
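
Note that "variable-length" in the docs means the length may differ between batches, not within one; a RaggedTensor has different lengths within a batch. A common workaround is to pad each batch to a common length (which is what utilities like `pad_sequences` do). A pure-Python sketch of the padding step, with made-up sample data:

```python
# Pad variable-length sequences to the longest length in the batch so a
# fixed-shape Conv1D input can be built. Illustrative stand-in for
# keras.preprocessing.sequence.pad_sequences.

def pad_sequences(seqs, value=0.0):
    max_len = max(len(s) for s in seqs)
    return [s + [value] * (max_len - len(s)) for s in seqs]

batch = [[1.0, 2.0], [3.0, 4.0, 5.0, 6.0], [7.0]]
padded = pad_sequences(batch)
print([len(s) for s in padded])   # every sequence now has length 4
```

With padding (optionally combined with masking), the batch becomes a dense tensor that Conv1D accepts.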

Training with dropout

守給你的承諾、 submitted on 2021-01-27 12:46:15
Question: How are the many thinned networks resulting from dropout averaged? And which weights are used during the testing stage? I'm really confused about this, because each thinned network would learn a different set of weights. So is backpropagation done separately for each of the thinned networks? And how exactly are weights shared among these thinned networks? At testing time only one neural network and one set of weights are used, so which set of weights is used? It is said that a

Keras Python Multi Image Input shape error

橙三吉。 submitted on 2021-01-27 11:53:28
Question: I am trying to teach myself to build a CNN that takes more than one image as input. Since the dataset I created to test this is large, and in the long run I hope to solve a problem involving a very large dataset, I am using a generator to read images into arrays that I pass to the Keras Model's fit_generator function. When I run my generator in isolation it works fine and produces outputs of the appropriate shape: it yields a tuple containing two entries, the first of which has shape (4
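
For a multi-input model, fit_generator expects each yield to be `(inputs, labels)` where `inputs` is a *list* of per-input batches (one array per model input), not one stacked array. A pure-Python sketch of that yield structure, with made-up data standing in for image arrays (this is an illustration, not the asker's generator):

```python
# Generator for a two-input Keras model: each yield is ([input_a, input_b], labels),
# with the per-input batches wrapped in a list.

def pair_generator(images_a, images_b, labels, batch_size):
    i = 0
    while True:
        a = images_a[i:i + batch_size]
        b = images_b[i:i + batch_size]
        y = labels[i:i + batch_size]
        i = (i + batch_size) % len(labels)
        yield [a, b], y

gen = pair_generator(list(range(8)), list(range(8, 16)), [0, 1] * 4, batch_size=4)
inputs, y = next(gen)
print(len(inputs), len(inputs[0]), len(y))   # 2 4 4
```

A frequent cause of the shape error in the question is yielding the inputs as one array of shape (2, batch, ...) instead of a list of two (batch, ...) arrays.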

How to implement RBF activation function in Keras?

时光怂恿深爱的人放手 submitted on 2021-01-27 06:47:48
Question: I am creating a customized activation function, an RBF activation function in particular: from keras import backend as K from keras.layers import Lambda l2_norm = lambda a,b: K.sqrt(K.sum(K.pow((a-b),2), axis=0, keepdims=True)) def rbf2(x): X = # here I need the inputs received from the previous layer Y = # here I need the weights that should be applied for this layer l2 = l2_norm(X,Y) res = K.exp(-1 * gamma * K.pow(l2,2)) return res The function rbf2 receives the previous layer as input: #some keras
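
The computation the snippet is building is phi(x, c) = exp(-gamma * ||x - c||^2), where c plays the role of the layer's weights (the RBF centers). A pure-Python sketch of the math, to pin down what the Keras version should compute (the sample vectors are made up):

```python
import math

# RBF: exp(-gamma * squared L2 distance between input x and center c).
# Note exp(-gamma * ||x-c||^2) == exp(-gamma * l2^2) with l2 = sqrt(sum((x-c)^2)),
# matching the K.exp(-1 * gamma * K.pow(l2, 2)) line in the question.

def rbf(x, center, gamma=1.0):
    sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-gamma * sq_dist)

print(rbf([1.0, 2.0], [1.0, 2.0]))            # distance 0 -> 1.0
print(round(rbf([1.0, 0.0], [0.0, 0.0]), 4))  # exp(-1) ~ 0.3679
```

In Keras, a trainable center requires a custom Layer subclass (with the centers created in `build` via `add_weight`) rather than a plain activation function, since activations have no weights of their own.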

Keras model.predict always 0

人走茶凉 submitted on 2021-01-27 06:00:55
Question: I am using keras.applications for transfer learning with ResNet50 and Inception v3, but when predicting I always get [[ 0.]]. The code below is for a binary classification problem. I have also tried VGG19 and VGG16; they work fine, it's just ResNet and Inception. The dataset is a 50/50 split, and I am only changing the model = applications.resnet50.ResNet50 line of code for each model. Below is the code: from keras.callbacks import EarlyStopping early_stopping = EarlyStopping(monitor='val_loss
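
Since the posted code is truncated, the cause cannot be confirmed, but one common culprit when only ResNet/Inception misbehave is preprocessing: each keras.applications model has its own `preprocess_input`, and the two families use different conventions. A pure-Python sketch of those two documented conventions for a single RGB pixel (the function names here are illustrative):

```python
# ResNet50 uses caffe-style preprocessing: RGB -> BGR, then per-channel
# ImageNet mean subtraction. Inception v3 scales pixels to [-1, 1].

IMAGENET_MEAN_BGR = [103.939, 116.779, 123.68]

def preprocess_resnet50(rgb):
    b, g, r = rgb[2], rgb[1], rgb[0]   # channel order flipped to BGR
    return [b - IMAGENET_MEAN_BGR[0], g - IMAGENET_MEAN_BGR[1], r - IMAGENET_MEAN_BGR[2]]

def preprocess_inception_v3(rgb):
    return [x / 127.5 - 1.0 for x in rgb]

print(preprocess_inception_v3([0, 127.5, 255]))   # [-1.0, 0.0, 1.0]
```

Feeding images scaled for VGG-style preprocessing into ResNet or Inception can push the sigmoid output to a constant, which would match the always-[[ 0.]] symptom; checking which `preprocess_input` the truncated code applies is a first step.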
