conv-neural-network

Realtime Data augmentation in Lasagne

Submitted by 若如初见 on 2019-12-12 09:09:49
Question: I need to do real-time augmentation on my dataset for input to a CNN, but I am having a really tough time finding suitable libraries for it. I have tried Caffe, but its DataTransform doesn't support many real-time augmentations such as rotation. So for ease of implementation I settled on Lasagne. But it seems that it also doesn't support real-time augmentation. I have seen some posts related to facial keypoint detection where the author uses the BatchIterator of nolearn.lasagne. But I am not sure
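A minimal numpy sketch of the kind of on-the-fly transform a custom batch iterator could apply before handing a batch to the network (the function name and the flip/rotation choices here are illustrative, not nolearn's API):

```python
import numpy as np

def augment_batch(Xb, rng=None):
    """Randomly flip and rotate each image in a batch of shape (N, C, H, W)."""
    rng = rng or np.random.default_rng(0)
    Xb = Xb.copy()  # never mutate the caller's data in place
    for i in range(Xb.shape[0]):
        if rng.random() < 0.5:
            Xb[i] = Xb[i, :, :, ::-1]               # horizontal flip
        k = int(rng.integers(0, 4))                  # random multiple of 90 degrees
        Xb[i] = np.rot90(Xb[i], k=k, axes=(1, 2))    # rotate H and W axes
    return Xb

batch = np.zeros((2, 1, 28, 28), dtype=np.float32)
print(augment_batch(batch).shape)  # (2, 1, 28, 28)
```

With nolearn.lasagne, this logic would go into a BatchIterator subclass so each mini-batch is transformed freshly every epoch rather than augmenting the dataset up front.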

Dimensions in convolutional neural network

Submitted by 心不动则不痛 on 2019-12-12 08:48:55
Question: I am trying to understand how the dimensions in a convolutional neural network behave. In the figure below, the input is a 28-by-28 matrix with 1 channel. Then there are 32 5-by-5 filters (with stride 2 in height and width). So I understand that the result is 14-by-14-by-32. But then in the next convolutional layer we have 64 5-by-5 filters (again with stride 2). So why is the result 7-by-7-by-64 and not 7-by-7-by-(32*64)? Aren't we applying each one of the 64 filters to each one of the 32 channels
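The output depth of a convolutional layer equals the number of filters K, not K times the input depth: each 5-by-5 filter actually spans all input channels (it is really 5-by-5-by-D and produces a single output channel), so 64 filters on a 14-by-14-by-32 input give 7-by-7-by-64. A quick sketch of the arithmetic, assuming 'same' padding so only the stride shrinks the spatial size:

```python
def conv_output_shape(h, w, d, num_filters, f=5, stride=2):
    # With 'same' padding the filter size f is absorbed by the padding,
    # so spatial dims shrink only by the stride (ceiling division).
    # Output depth is the filter count: each filter spans all d input channels.
    out_h = (h + stride - 1) // stride
    out_w = (w + stride - 1) // stride
    return out_h, out_w, num_filters

shape = conv_output_shape(28, 28, 1, 32)
print(shape)                           # (14, 14, 32)
print(conv_output_shape(*shape, 64))   # (7, 7, 64), not (7, 7, 32*64)
```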

Keras: Vanishing parameters in Conv2D layer within Lambda function

Submitted by 左心房为你撑大大i on 2019-12-12 05:27:17
Question: I am defining a Lambda layer with a function that uses the Conv2D layer:

    def lambda_func(x, k):
        y = Conv2D(k, (3, 3), padding='same')(x)
        return y

and calling it using:

    k = 64
    x = Conv2D(k, (3, 3), data_format='channels_last', padding='same',
               name='block1_conv1')(inputs)
    y = Lambda(lambda_func, arguments={'k': k}, name='block1_conv1_loc')(x)

But in model.summary(), the Lambda layer is showing no parameters!

    _________________________________________________________________
    Layer (type)                 Output
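The cause is that Keras only tracks the weights of layers it can see in the layer graph; a Conv2D instantiated inside a Lambda's function is invisible to that bookkeeping, so its parameters show as zero and are not trained. A toy analogy of the bookkeeping (this is illustrative pure Python, not Keras internals):

```python
class Layer:
    def __init__(self, n_params):
        self.n_params = n_params

class Model:
    def __init__(self):
        self.layers = []

    def add(self, layer):
        # Only layers registered directly with the model are tracked.
        self.layers.append(layer)
        return layer

    def count_params(self):
        return sum(l.n_params for l in self.layers)

model = Model()
model.add(Layer(n_params=1792))      # tracked, like a top-level Conv2D

def lambda_func():
    hidden = Layer(n_params=36928)   # created inside an opaque function:
    return hidden                    # never registered with the model

lambda_func()
print(model.count_params())  # 1792 -- the inner layer's weights are invisible
```

The usual fix is to create the Conv2D once as a normal layer (outside any Lambda) and call it on the tensor, so its weights are registered and trainable.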

Selecting number of strides and filters in CNN (Keras)

Submitted by 大兔子大兔子 on 2019-12-12 04:55:23
Question: I am using Keras to build a CNN model for signal classification. What is the best way in Keras to tune and select hyperparameters such as the number of strides and the number of filters?

Answer 1: Welcome to the main question of deep learning. There is no single valid solution that fits all problems. There are some patterns, though, such as starting with few filters in the early layers and increasing the filter count while reducing the spatial sizes. The best thing for you would be to start reading existing architectures like
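Absent a single rule, a plain grid search over candidate filter counts and strides is a common starting point. A sketch of the loop structure (the score function is a placeholder standing in for "build the model with these settings, train briefly, return validation accuracy"; libraries like keras-tuner do this more systematically):

```python
from itertools import product

filter_counts = [16, 32, 64]
strides = [1, 2]

def score(num_filters, stride):
    # Placeholder: in practice, build and train a small Keras model
    # with these hyperparameters and return its validation accuracy.
    return 1.0 / (num_filters * stride)  # dummy score for illustration only

# Evaluate every combination and keep the best-scoring configuration.
best = max(product(filter_counts, strides), key=lambda cfg: score(*cfg))
print(best)
```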

How can I load a directory of PNG files in TensorFlow?

Submitted by ﹥>﹥吖頭↗ on 2019-12-12 03:53:01
Question: I have a directory of PNG files. There is a train folder and a test folder. In the train folder I have 10 folders as 10 labels [0-9]. Each folder contains PNG files of that label. I want to load them in TensorFlow for training. I am new to TensorFlow and am having a very hard time getting this done. I am using Anaconda (Python 3.5).

    import tensorflow as tf
    filename_queue = tf.train.string_input_producer(
        tf.train.match_filenames_once("./images/*.jpg"))
    image_reader = tf.WholeFileReader()
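Independent of the TensorFlow input pipeline, the first step is just turning the folder layout into (path, label) pairs. A sketch assuming the train/<digit>/*.png structure described above:

```python
import os

def list_images(root):
    """Collect (path, label) pairs from root/<label>/*.png, label = folder name."""
    pairs = []
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        if not os.path.isdir(class_dir):
            continue
        for fname in sorted(os.listdir(class_dir)):
            if fname.endswith('.png'):
                pairs.append((os.path.join(class_dir, fname), int(label)))
    return pairs
```

The resulting filename and label lists can then feed a TensorFlow input pipeline (e.g. tf.data.Dataset.from_tensor_slices in modern TensorFlow) for decoding and batching.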

Tensorflow Deep Learning - model size and parameters

Submitted by 断了今生、忘了曾经 on 2019-12-12 02:44:17
Question: According to Andrej's blog, for a convolutional layer with parameter sharing, each filter introduces F x F x D weights, for a total of (F x F x D) x K weights and K biases. In my TensorFlow code I have an architecture like this (where D = 1):

    conv1: F = 3, K = 32, S = 1, P = 1
    pool1:
    conv2: ...

and so on. According to the formula, a model generated with F = 3 for conv1 should have 9·K weights, i.e. a smaller model, and a model generated with F = 5 should have 25·K weights, i.e.
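The formula can be checked directly: with D = 1, conv1 has F·F weights per filter, so F = 3 gives 9·K weights and F = 5 gives 25·K weights, plus K biases either way.

```python
def conv_params(f, d, k):
    # (F x F x D) weights per filter, K filters, plus one bias per filter.
    weights = f * f * d * k
    biases = k
    return weights, biases

print(conv_params(3, 1, 32))  # (288, 32): 9 weights per filter, 9*32 total
print(conv_params(5, 1, 32))  # (800, 32): 25 weights per filter, 25*32 total
```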

Convolutional Neural Networks with Caffe and NEGATIVE IMAGES

Submitted by 一个人想着一个人 on 2019-12-12 00:26:43
Question: When training a set of classes (let's say the number of classes is N) in Caffe (or any CNN framework), when I make a query to the caffemodel, I get a probability for each class that the image could match. So, let's take a picture similar to class 1, and I get the result:

    1: 90%
    2: 10%
    rest: 0%

The problem is: when I take a random picture (for example of my environment), I keep getting the same kind of result, where one of the classes is predominant (>90% probability) but it doesn't belong
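This behaviour follows from the softmax output layer: the N class probabilities must sum to 1, so even an out-of-distribution image is forced onto one of the known classes, often with high confidence. A numpy illustration:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Logits for a random, unrelated image can still be peaked on one class.
probs = softmax(np.array([4.0, 1.0, 0.5, 0.2]))
print(probs.sum())     # ~1.0 -- there is no "none of the above" option
print(probs.argmax())  # class 0 dominates despite the image matching nothing
```

Common workarounds are to train an explicit background/negative class, or to reject predictions whose top probability falls below a confidence threshold.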

How do I reduce 4096-dimensional feature vector to 1024-dimensional vector in CNN Caffemodel?

Submitted by 余生长醉 on 2019-12-11 22:07:18
Question: I used the 16-layer VGGNet to extract features from an image. It outputs a 4096-dimensional feature vector. However, I need a 1024-dimensional vector. How do I further reduce this 4096-dimensional vector to a 1024-dimensional one? Do I need to add a new layer on top of fc7?

Answer 1: Yes, you need to add another layer on top of fc7. This is what your last few layers should look like:

    layers {
      bottom: "fc7"
      top: "fc7"
      name: "relu7"
      type: RELU
    }
    layers {
      bottom: "fc7"
      top: "fc7"
      name: "drop7"
      type: DROPOUT
      dropout_param {
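The added fully connected (InnerProduct) layer is just a learned linear projection from 4096 to 1024 dimensions. What it computes, sketched in numpy with random weights standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
fc7 = rng.standard_normal(4096)                # VGG-16 fc7 feature vector
W = rng.standard_normal((1024, 4096)) * 0.01   # weights the new layer would learn
b = np.zeros(1024)                             # biases of the new layer

fc_reduced = W @ fc7 + b                       # new 1024-dimensional feature
print(fc_reduced.shape)  # (1024,)
```

In Caffe itself this corresponds to appending an InnerProduct layer with 1024 outputs after drop7 and fine-tuning so the projection is learned rather than random.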

How to connect a Convolutional layer with an LSTM in TensorFlow Keras

Submitted by 混江龙づ霸主 on 2019-12-11 19:45:25
Question: I'm experimenting with neural network architectures and I am trying to connect a 2D convolution to an LSTM cell in TensorFlow Keras. Here is my original model:

    model = Sequential()
    model.add(CuDNNLSTM(256, input_shape=(train_x.shape[1:]), return_sequences=True))
    model.add(Dropout(0.2))
    model.add(BatchNormalization())
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(4, activation='softmax'))

It works like magic. train_x is 1209 sequences, each set has 23 numbers and
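To feed 2D-convolution features into an LSTM, each timestep's feature map must be flattened to a vector so the LSTM sees (batch, time, features); in Keras this is typically achieved by wrapping the Conv2D and Flatten layers in TimeDistributed. The reshape itself, sketched in numpy (the shapes here are illustrative):

```python
import numpy as np

batch, time, h, w, c = 4, 23, 8, 8, 16
conv_features = np.zeros((batch, time, h, w, c))  # per-timestep conv output

# Flatten each timestep's (h, w, c) feature map into a single vector,
# giving the (batch, time, features) layout an LSTM expects.
lstm_input = conv_features.reshape(batch, time, h * w * c)
print(lstm_input.shape)  # (4, 23, 1024)
```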

Keras.layers.concatenate generates an error

Submitted by 半世苍凉 on 2019-12-11 19:25:36
Question: I am trying to train a CNN with two input branches. These two branches (b1, b2) are to be merged into a densely connected layer of 256 neurons with a dropout rate of 0.25. This is what I have so far:

    batch_size, epochs = 32, 3
    ksize = 2
    l2_lambda = 0.0001

    # My first model (b1)
    b1 = Sequential()
    b1.add(Conv1D(128*2, kernel_size=ksize, activation='relu',
                  input_shape=(xtest.shape[1], xtest.shape[2]),
                  kernel_regularizer=keras.regularizers.l2(l2_lambda)))
    b1.add(Conv1D(128*2, kernel_size=ksize
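Whatever the specific error, keras.layers.concatenate joins the two branch outputs along a feature axis, so every other dimension (notably the batch dimension) must match between branches. The underlying operation in numpy, with illustrative feature sizes:

```python
import numpy as np

b1_out = np.zeros((32, 100))   # batch of 32, 100 features from branch 1
b2_out = np.zeros((32, 60))    # batch of 32, 60 features from branch 2

# Concatenate along the last (feature) axis: feature counts add up,
# all other dimensions must agree exactly.
merged = np.concatenate([b1_out, b2_out], axis=-1)
print(merged.shape)  # (32, 160)
```

The merged tensor would then feed the Dense(256) layer with Dropout(0.25) described in the question.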