conv-neural-network

Convolutional Neural Network - Dropout kills performance

Submitted by 生来就可爱ヽ(ⅴ<●) on 2020-01-07 03:58:12
Question: I'm building a convolutional neural network using Tensorflow (I'm new to both) in order to recognize letters. I've got very weird behavior with the dropout layer: if I don't use it (i.e. keep_proba at 1), it performs quite well and learns (see the Tensorboard screenshots of accuracy and loss below, with training in blue and testing in orange). However, when I add the dropout layer during the training phase (I tried 0.8 and 0.5), the network learns nothing: the loss quickly falls to around 3 or …
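The excerpt cuts off before the asker's code, but a frequent cause of this symptom in hand-written TensorFlow is either applying dropout at evaluation time too, or forgetting the inverse scaling that keeps activations comparable between the two phases. A minimal NumPy sketch of inverted dropout (the scheme tf.nn.dropout implements) is shown below for reference; the function name and the fixed RNG seed are illustrative, not from the question:

```python
import numpy as np

def dropout(x, keep_prob, training, rng=None):
    """Inverted dropout: kept units are scaled by 1/keep_prob at
    train time, so evaluation needs no rescaling at all."""
    if not training or keep_prob >= 1.0:
        return x  # identity at evaluation time
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) < keep_prob  # keep each unit w.p. keep_prob
    return x * mask / keep_prob

x = np.ones((4, 4))
# evaluation must be the identity, otherwise test loss is meaningless
assert np.array_equal(dropout(x, 0.5, training=False), x)
```

If the evaluation pass accidentally runs with keep_prob at 0.5, the reported accuracy and loss reflect a randomly crippled network, which looks exactly like "the network learns nothing".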

CNNs on Keras converge to the same value no matter the input

Submitted by 谁说我不能喝 on 2020-01-06 14:20:00
Question: I've been learning Keras recently and tried my hand at the CIFAR10 dataset with CNNs. However, the model I trained (you can run the code here) returns the same answer for every input, no matter what. Did I forget something in the model definition? Answer 1: You have forgotten to normalize the images. Currently, the values in x_train are in the range [0, 255]. This causes large gradient updates and stalls the training process. One simple normalization scheme in this case would be: x_train = x_train…
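The answer's snippet is truncated, but the standard completion of this normalization scheme (cast to float, divide by the maximum pixel value) can be sketched as follows; the toy array stands in for the real CIFAR10 data:

```python
import numpy as np

# toy stand-in for CIFAR10 pixel data (uint8 in [0, 255])
x_train = np.array([[0, 127, 255]], dtype=np.uint8)

# scale to [0, 1] before training; raw pixel values up to 255
# produce very large gradient updates and can stall learning
x_train = x_train.astype("float32") / 255.0
```

The same transform must of course be applied to x_test, or evaluation will see inputs on a completely different scale than training did.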

How to properly get the output shape when converting code from tf.nn.conv2d_transpose to tf.keras.layers.Conv2DTranspose

Submitted by 浪子不回头ぞ on 2020-01-06 10:01:17
Question: I am converting some code from TensorFlow to tf.keras. I am used to upsampling with tf.nn.conv2d_transpose, which receives an output_shape parameter, and everything works fine. When I swap to Keras models I start using the Conv2DTranspose layer, but I can't get the desired output shape. For the sake of simplicity, let's assume my input shape is (38, 25); then I use a max-pooling operation with kernel shape (2,2) and strides (2,2), which outputs a (19, 13) volume. Now is when the 'deconvolution' is applied…
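The core of the problem can be seen from the transposed-convolution output-size formula that tf.keras.layers.Conv2DTranspose uses with 'valid' padding: out = (in - 1) * stride + kernel (+ output_padding). A tiny sketch of the arithmetic (assuming, as the excerpt suggests, a (2,2) kernel with (2,2) strides):

```python
def conv_transpose_out(size, kernel, stride, output_padding=0):
    # output length of a transposed convolution with 'valid' padding,
    # matching tf.keras.layers.Conv2DTranspose
    return (size - 1) * stride + kernel + output_padding

# the even dimension round-trips cleanly: 38 -> pool -> 19 -> 38
assert conv_transpose_out(19, kernel=2, stride=2) == 38
# but the odd one does not: 25 -> pool -> 13 -> 26, not 25; the size
# lost by pooling an odd dimension has to be restored explicitly,
# e.g. with a Cropping2D layer after the Conv2DTranspose
assert conv_transpose_out(13, kernel=2, stride=2) == 26
```

This is exactly the convenience tf.nn.conv2d_transpose's output_shape argument provided: it resolved the ambiguity for you, whereas Keras requires you to encode it via output_padding or cropping.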

How to understand the Cifar10 prediction output?

Submitted by 十年热恋 on 2020-01-06 06:59:13
Question: I have trained a Cifar10 (Caffe) model for two-class classification: pedestrian and non-pedestrian. Training looks fine, and I have the updated weights in a caffemodel file. I used two labels, 1 for pedestrians and 2 for non-pedestrians, together with pedestrian images (64 x 160) and background images (64 x 160). After training, I test with a positive image (a pedestrian image) and a negative image (a background image). My testing prototxt file is as shown below: name: "CIFAR10_quick_test" layers {…
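Two things are worth noting here. First, Caffe expects class labels starting at 0, so with a two-way classifier the labels should be 0 and 1; using 1 and 2 with num_output: 2 puts label 2 out of range. Second, the prediction output is simply a row of softmax probabilities indexed from 0, interpreted as below (the class-name mapping is an assumption for illustration, not from the question):

```python
import numpy as np

def interpret(prob, classes=("pedestrian", "non-pedestrian")):
    """prob: one softmax row from the net's output blob."""
    prob = np.asarray(prob, dtype=float)
    idx = int(np.argmax(prob))  # class with highest probability
    return classes[idx], float(prob[idx])

# e.g. a confident pedestrian prediction
label, confidence = interpret([0.92, 0.08])
assert label == "pedestrian" and confidence == 0.92
```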

tensorflow, image segmentation convnet InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600

Submitted by 陌路散爱 on 2020-01-06 06:47:15
Question: I am trying to segment images from the BRATS challenge. I am using U-Net in a combination of these two repositories: https://github.com/zsdonghao/u-net-brain-tumor https://github.com/jakeret/tf_unet When I try to output the prediction statistics, a shape-mismatch error comes up: InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600 [[Node: Reshape_2 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]…
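With this class of error, a useful first diagnostic is to factor the two element counts the message reports: their ratio usually identifies the axis the reshape target forgot. Here the ratio is exactly 500, so the incoming tensor holds 500 times more values than the requested shape can hold; which axis the 500 corresponds to (batch, slices, or channels) depends on the asker's pipeline and is not determinable from the excerpt:

```python
# element counts taken verbatim from the InvalidArgumentError message
actual, requested = 28_800_000, 57_600

# an exact integer ratio strongly suggests a whole missing axis,
# rather than e.g. an off-by-one in height or width
assert actual % requested == 0
factor = actual // requested
assert factor == 500  # a batch/slice/channel axis of 500 that the
                      # reshape's target shape does not account for
```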

Multi-Classification NN with Keras error

Submitted by ℡╲_俬逩灬. on 2020-01-06 06:02:25
Question: I am getting an error when trying to do multi-class classification with three classes. Error: TypeError: fit_generator() got multiple values for argument 'steps_per_epoch' Code giving the error: NN.fit_generator( train_set, train_labels, steps_per_epoch=(train_samples / batch_size), epochs=epochs, validation_data=(validation_set, validation_labels), validation_steps=(validation_samples / batch_size)) Full code: https://pastebin.com/V1YwJW3X I would greatly appreciate any help with the issue, as I am at…
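The error message itself explains the bug: fit_generator's second positional parameter is steps_per_epoch, so passing train_labels as a second positional argument fills that slot, and the explicit steps_per_epoch= keyword then supplies it a second time. A pure-Python reproduction with a stand-in function mirroring the leading signature of Keras 2's Model.fit_generator:

```python
# stand-in with the same leading signature as Keras 2's
# Model.fit_generator(generator, steps_per_epoch=None, epochs=1, ...)
def fit_generator(generator, steps_per_epoch=None, epochs=1, **kwargs):
    return steps_per_epoch

try:
    # passing labels as a second positional argument, as in the
    # question, fills steps_per_epoch twice -> TypeError
    fit_generator("train_set", "train_labels", steps_per_epoch=10)
except TypeError as e:
    msg = str(e)

assert "multiple values" in msg
```

The fix is to pass a single generator that yields (x, y) batches, or, since the data here is already in arrays, to call model.fit(x, y, ...) instead of fit_generator.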

Image Recognition with Scalar output using CNN MXnet in R

Submitted by 泪湿孤枕 on 2020-01-06 05:28:53
Question: So I am trying to do image recognition with the mxnet package in R, using a CNN to predict a scalar output (in my case, wait time) from an image. However, when I do this, I get the same resultant output every time (it predicts the same number, which is probably just the average of all of the results). How do I get it to predict the scalar output correctly? Also, my images have already been pre-processed by greyscaling them and converting them into the pixel format below. I am essentially using images…
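The asker's guess that the constant is "probably just the average" is well founded: among constant predictors, the target mean is exactly what minimizes mean squared error, so an under-trained or collapsed regressor settles there. A small NumPy illustration (toy wait-time values, not the asker's data); the usual remedies are normalizing the target values and using a linear regression head (in MXNet, mx.symbol.LinearRegressionOutput) rather than a classification output:

```python
import numpy as np

# toy stand-in for the wait times the CNN is asked to regress
y = np.array([4.0, 8.0, 15.0, 16.0, 23.0, 42.0])

def mse(c):
    # loss of a degenerate model that always predicts the constant c
    return float(np.mean((y - c) ** 2))

# the target mean minimizes MSE over all constant predictions, which
# is why a collapsed regressor "predicts the same number"
best = y.mean()
assert mse(best) <= mse(best + 1.0)
assert mse(best) <= mse(best - 1.0)
```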

How to do cross-validation with multiple input data in CNN model with Keras

Submitted by 爱⌒轻易说出口 on 2020-01-05 08:26:23
Question: My dataset consists of a time series (10080 points) and other descriptive-statistics features (85) joined into one row. The DataFrame is 921 x 10166. The data looks something like this, with the last 2 columns as Y (labels):

id   x0  x1     x2    x3    x4    x5   ... x10079  mean  var ... Y0  Y1
1    40  31.05  25.5  25.5  25.5  25   ... 33      24    1       1   0
2    35  35.75  36.5  26.5  36.5  36.5 ... 29      31    2       0   1
3    35  35.70  36.5  36.5  36.5  36.5 ... 29      25    1       1   0
4    40  31.50  23.5  24.5  26.5  25   ... 33      29    3       0   1
...
921  40  31.05  25.5  25.5  25.5  25   ... 23      33    2       0   1

I…
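Cross-validation with a multi-input Keras model reduces to generating one set of row indices per fold and slicing every input array with the same indices, so the rows stay aligned. A self-contained NumPy sketch of the index logic (sklearn's KFold does the equivalent); the array shapes mirror the question's 921-row dataset, and the helper name is illustrative:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, val_idx) index pairs over n rows."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val

# two inputs sharing rows: the time series and the summary statistics
x_series = np.zeros((921, 10080))
x_stats = np.zeros((921, 85))

for train, val in kfold_indices(len(x_series), k=5):
    # slice every input with the SAME indices so rows stay aligned;
    # these lists are what a multi-input model.fit would receive
    tr = [x_series[train], x_stats[train]]
    va = [x_series[val], x_stats[val]]
    assert len(tr[0]) == len(tr[1]) and len(va[0]) == len(va[1])
```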

Caffe net.forward call for multiple batches

Submitted by 拜拜、爱过 on 2020-01-05 07:18:05
Question: I am using the ImageData type of data in a .prototxt file and trying to get the features from Python code using net.forward() and net.blobs of the Caffe library. However, I get only 50 features after the net.forward() call, which is the batch_size I set in the .prototxt file. How can I get the features for subsequent batches? Do I have to call net.forward() multiple times? Source: https://stackoverflow.com/questions/48520103/caffe-net-forward-call-for-multiple-batches
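The short answer is yes: with an ImageData layer, each net.forward() call consumes one batch of batch_size images, so covering the whole dataset means calling it ceil(n_images / batch_size) times and concatenating the results. A generic, runnable sketch of that loop with a stand-in for the Caffe net (in real code, forward would be a closure reading net.blobs after net.forward()):

```python
import math

def extract_features(forward, n_images, batch_size):
    """Call a Caffe-style forward() once per batch and collect the
    per-image feature rows it yields."""
    features = []
    for _ in range(math.ceil(n_images / batch_size)):
        features.extend(forward())  # one batch of batch_size rows
    return features[:n_images]  # drop wrap-around padding in the last batch

# stand-in for net.forward() + blob read, returning 50 rows per call
batch = [[0.0]] * 50
feats = extract_features(lambda: batch, n_images=120, batch_size=50)
assert len(feats) == 120
```

The final slice matters because the ImageData layer wraps around to the start of the list to fill the last batch, so the trailing rows are duplicates of the first images.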