conv-neural-network

Does the dropout layer need to be defined in deploy.prototxt in caffe?

[亡魂溺海] submitted on 2019-12-07 10:01:37

Question: In the AlexNet implementation in Caffe, I saw the following layer in the deploy.prototxt file:

    layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } }

Now, the key idea of dropout is to randomly drop units (along with their connections) from the neural network during training. Does this mean that I can simply delete this layer from deploy.prototxt, as this file is meant to be used during testing only?

Answer 1: Yes. Dropout is not required during…
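The reason the layer is removable: with the usual "inverted" dropout (which Caffe implements), the rescaling happens at training time, so at test time the layer is the identity. A minimal NumPy sketch of that behaviour (the function name `dropout` and its signature are illustrative, not Caffe's API):

```python
import numpy as np

def dropout(x, ratio=0.5, train=True, rng=None):
    """Inverted dropout: active during training only; identity at inference."""
    if not train:
        return x  # deploy/test phase: the layer does nothing
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= ratio
    # Rescale surviving units so the expected activation is unchanged,
    # which is what makes the test-time pass a plain identity.
    return x * mask / (1.0 - ratio)

x = np.ones(8)
print(dropout(x, train=False))   # identical to x
print(dropout(x, train=True))    # each entry is 0 or 2 (= 1 / (1 - 0.5))
```

Since the test-time pass returns its input untouched, deleting the layer from deploy.prototxt changes nothing.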

Convolutional Neural Networks: How many pixels will be covered by each of the filters?

半腔热情 submitted on 2019-12-07 08:40:49

Question: How can I calculate the area (in the original image) covered by each of the filters in my network? For example, let's say the image is W x W pixels and I am using the following network:

    layer 1 : conv : 5x5
    layer 2 : pool : 3x3
    layer 3 : conv : 5x5
    .....
    layer N : conv : 5x5

I want to calculate how much area in the original image is covered by each filter. For example, the filter in layer 1 covers 5x5 pixels in the original image.

Answer 1: A similar problem would be, how many pixels will be…
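This quantity is the receptive field, and it can be accumulated layer by layer: each layer grows the field by (kernel − 1) times the product of all earlier strides. A short sketch (strides are not given in the question, so stride 1 is assumed here):

```python
def receptive_field(layers):
    """Receptive field (in input pixels) after each layer.

    `layers` is a list of (kernel_size, stride) pairs; dilation is assumed 1.
    """
    rf, jump = 1, 1   # jump = input-pixel distance between adjacent outputs
    fields = []
    for k, s in layers:
        rf += (k - 1) * jump   # each layer widens the field by (k-1) * jump
        jump *= s              # striding spreads subsequent taps further apart
        fields.append(rf)
    return fields

# conv 5x5, pool 3x3, conv 5x5, all stride 1 (assumed):
print(receptive_field([(5, 1), (3, 1), (5, 1)]))  # [5, 7, 11]
```

So with stride-1 layers the field grows by kernel − 1 per layer; a strided pool in between would multiply the growth of every later layer.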

Multi-feature causal CNN - Keras implementation

99封情书 submitted on 2019-12-07 07:14:50

Question: I'm currently using a basic LSTM to make regression predictions, and I would like to implement a causal CNN instead, as it should be computationally more efficient. I'm struggling to figure out how to reshape my current data to fit the causal CNN cell and represent the same data/timestep relationship, as well as what the dilation rate should be set at. My current data has the shape (number of examples, lookback, features). Here is a basic example of the LSTM network I'm using right now:

    lookback = 20 #…
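On the reshaping question: the LSTM-style (examples, lookback, features) tensor already matches what a causal Conv1D expects, so no reshape is needed; the "causal" part just means left-padding so output t never sees inputs after t, and dilation rates are commonly doubled per layer (1, 2, 4, …) to widen the horizon. A minimal NumPy sketch of one dilated causal convolution (illustrative, not the Keras internals):

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution: output[t] depends only on x[:t + 1].

    x: (timesteps, features); kernel: (kernel_size, features).
    """
    k = kernel.shape[0]
    pad = (k - 1) * dilation                    # pad on the LEFT only
    xp = np.concatenate([np.zeros((pad, x.shape[1])), x], axis=0)
    out = np.zeros(len(x))
    for t in range(len(x)):
        taps = xp[t : t + pad + 1 : dilation]   # k taps ending at time t
        out[t] = np.sum(taps * kernel)
    return out
```

Because only past timesteps are padded in, editing a future timestep of the input leaves all earlier outputs untouched, which is exactly the data/timestep relationship the LSTM preserved.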

Convolutional2D Siamese Network in Keras

懵懂的女人 submitted on 2019-12-07 03:23:44

Question: I'm trying to use Keras's Siamese layer in conjunction with a shared Convolution2D layer. I don't need the input to pass through any other layers before the Siamese layer, but the Siamese layer requires that input layers be specified. I can't figure out how to create input layers that match the input of the conv layer. The only concrete example I could find of the Siamese layer being used is in the tests, where Dense layers (with vector inputs) are used as input. Basically, I want an input…
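Independent of the specific Keras API, the essence of a Siamese setup is that both branches run the *same* weights, and the comparison happens on the resulting embeddings. A toy NumPy sketch of that idea (the 3x3 convolution and L2 distance are illustrative choices):

```python
import numpy as np

def shared_embed(img, kernel):
    """Toy shared feature extractor: one 3x3 valid convolution, flattened.
    The same `kernel` serves both branches -- that weight sharing is the
    defining property of a Siamese network."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out.ravel()

def siamese_distance(a, b, kernel):
    """L2 distance between the two shared-weight embeddings."""
    return np.linalg.norm(shared_embed(a, kernel) - shared_embed(b, kernel))
```

Identical inputs therefore always map to distance zero, regardless of the kernel, because both branches apply the identical transform.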

ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

筅森魡賤 submitted on 2019-12-07 02:59:22

Question: I am building a prediction model for sequence data using the Conv1D layer provided by Keras. This is how I did it:

    model = Sequential()
    model.add(Conv1D(60, 32, strides=1, activation='relu', padding='causal', input_shape=(None, 64, 1)))
    model.add(Conv1D(80, 10, strides=1, activation='relu', padding='causal'))
    model.add(Dropout(0.25))
    model.add(Conv1D(100, 5, strides=1, activation='relu', padding='causal'))
    model.add(MaxPooling1D(1))
    model.add(Dropout(0.25))
    model.add(Dense(300, activation='relu'))
    model.add…
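The error in the title follows directly from a shape-bookkeeping rule: Keras's `input_shape` excludes the batch axis, and Conv1D wants 3-D input of (batch, steps, channels). Writing `input_shape=(None, 64, 1)` therefore describes a 4-D tensor, so the likely fix (an inference from the error, not stated in the truncated post) is `input_shape=(64, 1)`. The shapes in NumPy terms:

```python
import numpy as np

# Conv1D expects (batch, timesteps, channels) -- three dimensions.
batch = np.zeros((32, 64, 1))     # e.g. 32 sequences, 64 steps, 1 channel
print(batch.ndim)                 # 3 -- what Conv1D accepts

# input_shape=(None, 64, 1) prepends a batch axis to THREE more axes,
# i.e. (batch, None, 64, 1): four dimensions.
wrong = np.zeros((32, 5, 64, 1))  # the shape that shorthand implies
print(wrong.ndim)                 # 4 -- "expected ndim=3, found ndim=4"
```

Dropping the leading `None` from `input_shape` restores the expected rank.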

Multiple pretrained networks in Caffe

痴心易碎 submitted on 2019-12-07 00:33:33

Question: Is there a simple way (e.g. without modifying the Caffe code) to load weights from multiple pretrained networks into one network? The network contains some layers with the same dimensions and names as both pretrained networks. I am trying to achieve this using NVIDIA DIGITS and Caffe.

EDIT: I thought it wouldn't be possible to do it directly from DIGITS, as confirmed by the answers. Can anyone suggest a simple way to modify the DIGITS code to be able to select multiple pretrained networks? I checked the…
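Outside of DIGITS, the usual scheme is name-based copying: iterate over each pretrained net's parameters and overwrite the target's layers whose names (and shapes) match, once per source net (with pycaffe this is roughly what calling `Net.copy_from` for each .caffemodel does). A plain-dict sketch of that merge logic, with list length standing in for the shape check:

```python
def merge_pretrained(target, *pretrained):
    """Copy parameters into `target` from several pretrained nets, matching
    by layer name and size.  Later sources win on name collisions; layers
    absent from `target` are ignored.  Nets are modelled as dicts of
    layer name -> weights (a list, as a stand-in for an array)."""
    merged = dict(target)
    for net in pretrained:
        for name, weights in net.items():
            if name in merged and len(merged[name]) == len(weights):
                merged[name] = weights
    return merged

target = {"conv1": [0, 0, 0], "fc": [0, 0]}
net_a = {"conv1": [1, 1, 1], "extra": [9]}   # only conv1 matches target
net_b = {"fc": [2, 2]}
print(merge_pretrained(target, net_a, net_b))
```

Layers unique to a source net never leak into the target, and the shape guard prevents clobbering a layer that merely shares a name.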

Keras and input shape to Conv1D issues

筅森魡賤 submitted on 2019-12-06 16:14:05

Question: First off, I am very new to neural nets and Keras. I am trying to create a simple neural network using Keras where the input is a time series and the output is another time series of the same length (1-dimensional vectors). I made dummy code to create random input and output time series using a Conv1D layer. The Conv1D layer outputs 6 different time series (because I have 6 filters), and in the next layer I add all 6 of those outputs into one, which is the output of the entire network.…
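The "add all 6 filter outputs into one" step is just a sum over the filter axis, equivalent to a 1x1 convolution with unit weights. A NumPy sketch of the whole pipeline (toy filters, `same`-length responses via `np.convolve`):

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, F = 50, 5, 6                  # timesteps, kernel size, number of filters
x = rng.normal(size=T)              # one 1-D input series
kernels = rng.normal(size=(F, K))

# Six parallel same-length filter responses, one per filter.
responses = np.stack([np.convolve(x, k, mode="same") for k in kernels])

# Collapse the 6 responses into a single series by summing over the filter
# axis -- the same effect as a 1x1 conv with all-ones weights.
combined = responses.sum(axis=0)
print(combined.shape)  # (50,)
```

The output has the same length as the input, matching the question's requirement of equal-length input and output series.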

TensorFlow tfrecords: tostring() changes dimension of image

女生的网名这么多〃 submitted on 2019-12-06 15:33:55

Question: I have built a model to train a convolutional autoencoder in TensorFlow. I followed the instructions on Reading Data from the TF documentation to read in my own images of size 233 x 233 x 3. Here is my convert_to() function, adapted from those instructions:

    def convert_to(images, name):
        """Converts a dataset to tfrecords."""
        num_examples = images.shape[0]
        rows = images.shape[1]
        cols = images.shape[2]
        depth = images.shape[3]
        filename = os.path.join(FLAGS.tmp_dir, name + '.tfrecords')
        print(…
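As the title suggests, `tostring()` (the now-deprecated alias of `tobytes()`) does not preserve dimensions: it serializes the array to a flat byte string, so the reading side must restore both dtype and shape explicitly (in TF that reshape happens after `tf.io.decode_raw`). The round trip in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(233, 233, 3), dtype=np.uint8)

# Serialising for a tfrecord flattens the array to 1-D bytes --
# the (233, 233, 3) shape is NOT stored in the byte string.
raw = img.tobytes()

# The reader must therefore supply dtype AND shape itself, typically from
# the rows/cols/depth fields written alongside the image in the record.
restored = np.frombuffer(raw, dtype=np.uint8).reshape(233, 233, 3)
print(np.array_equal(img, restored))  # True
```

This is why `convert_to()` stores `rows`, `cols`, and `depth` in the record: they are the only route back to the original dimensions.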

Finetune a Torch model

﹥>﹥吖頭↗ submitted on 2019-12-06 15:33:42

Question: I have loaded a model in Torch and I would like to fine-tune it. For now, I'd like to retrain the last 2 layers of the network (though in the future I may want to add layers). How can I do this? I have been looking for tutorials, but I haven't found what I am looking for. Any tips?

Answer 1: I'm not sure I understood what you are asking for. If you want to leave the net as it was except for the 2 layers you want to train (or fine-tune), you have to stop the backpropagation on the ones you don't…
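Whatever the framework, "retrain only the last 2 layers" comes down to excluding the frozen layers' parameters from the update step (in Lua Torch this is commonly done by zeroing or skipping their gradParameters; the exact mechanism is framework-specific). A framework-agnostic NumPy sketch of that update rule:

```python
import numpy as np

def sgd_step(params, grads, trainable, lr=0.1):
    """Plain SGD that updates only layers marked trainable; frozen layers
    keep their pretrained weights (equivalent to a per-layer lr of 0)."""
    return [p - lr * g if t else p
            for p, g, t in zip(params, grads, trainable)]

params = [np.full(2, float(i)) for i in range(4)]   # four toy 'layers'
grads = [np.ones(2) for _ in params]
trainable = [False, False, True, True]              # fine-tune last 2 only
new_params = sgd_step(params, grads, trainable)
```

After the step, layers 0 and 1 are bit-identical to the pretrained values while layers 2 and 3 have moved against their gradients, which is exactly the fine-tuning behaviour the question asks for.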

How to predict multiple images in Keras at a time using multiprocessing (e.g. with different CPUs)?

老子叫甜甜 submitted on 2019-12-06 14:42:39

Question: I have a lot of PNG images that I want to classify using a trained CNN model. To speed up the process, I would like to use multiprocessing with CPUs (I have 72 available; here I'm just using 4). I don't have a GPU available at the moment, but if necessary I could get one. My workflow:

    1. read a figure with OpenCV
    2. adapt shape and format
    3. use mymodel.predict(img) to get the probability for each class

When it comes to the prediction step, it never finishes the mymodel.predict(img) step. When I…