conv-neural-network

Error while using Conv2DLayer with lasagne NeuralNet

£可爱£侵袭症+ submitted on 2019-12-10 15:55:44
Question: I have Windows 8.1 64-bit and use the WinPython distribution (Python 3.4) recommended here: http://deeplearning.net/software/theano/install_windows.html#installing-theano. I've gone through every step of the tutorial (excluding the CUDA stuff and GPU config), uninstalled everything and did it again, but my problem persists. I am trying to build a convolutional neural network using Lasagne. Every layer I've tested so far works; only Conv2DLayer throws errors. The code is as follows: net2 = NeuralNet(
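Independently of the installation problem, it can help to sanity-check what a Conv2DLayer actually computes. Below is a minimal NumPy sketch of a single-channel "valid" 2-D convolution (the function name and arrays are hypothetical, not Lasagne's API), useful for verifying expected output shapes by hand:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation) of one channel."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1   # output shrinks by kernel size - 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
print(conv2d_valid(img, np.ones((3, 3))).shape)  # (2, 2)
```

A 4x4 input with a 3x3 kernel yields a 2x2 map, which is the same shape arithmetic Lasagne applies internally.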

A reusable Tensorflow convolutional Network

一个人想着一个人 submitted on 2019-12-10 12:56:50
Question: I want to reuse code from the TensorFlow "MNIST for Pros" CNN example. My images are 388px x 191px, with only 2 output classes. The original code can be found here. I tried to reuse this code by changing the input & output layers ONLY, as shown below: input layer x = tf.placeholder("float", shape=[None, 74108]) y_ = tf.placeholder("float", shape=[None, 2]) x_image = tf.reshape(x, [-1,388,191,1]) output layer W_fc2 = weight_variable([1024, 2]) b_fc2 = bias_variable([2]) Running the modified
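The flattened placeholder size and the reshape must agree exactly: 388 x 191 x 1 = 74108, so those two layers are mutually consistent. A NumPy sketch of the same arithmetic (batch size 2 chosen arbitrarily for illustration):

```python
import numpy as np

height, width, channels = 388, 191, 1
flat = height * width * channels            # 74108, the placeholder's second dim
batch = np.zeros((2, flat))                 # two flattened grayscale images
images = batch.reshape(-1, height, width, channels)
print(flat, images.shape)  # 74108 (2, 388, 191, 1)
```

When adapting the MNIST example, the hidden fully-connected layer sizes (e.g. the 1024 in W_fc2) must also be recomputed from the new post-pooling spatial dimensions, which is where such reuse usually breaks.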

Caffe accuracy bigger than 100%

微笑、不失礼 submitted on 2019-12-10 11:04:17
Question: I'm building a net, and when I use the custom train function provided in the LeNet example with a batch size bigger than 110, my accuracy gets bigger than 1 (100%). If I use batch size 32, I get 30 percent accuracy. With batch size 64, my net accuracy is 64; with batch size 128, the accuracy is 1.2. My images are 32x32. Train dataset: 56 images of neutral faces, 60 images of surprise faces. Test dataset: 15 images of neutral faces, 15 images of surprise faces. This is my code: def
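Accuracy above 100% almost always means correct predictions are being summed without dividing by the number of examples evaluated (so the figure scales with batch size, as observed). A minimal plain-Python sketch of the normalization (not the asker's Caffe code):

```python
def accuracy(predictions, labels):
    """Fraction of correct predictions; dividing by the total keeps it in [0, 1]."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```

Whatever loop accumulates per-batch correct counts must divide by the total number of test samples, not by the number of batches or a fixed constant.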

Changing the input data layer during training in Caffe

我只是一个虾纸丫 submitted on 2019-12-10 10:41:08
Question: Is it possible to change the input source of an ImageData layer or a MemoryData layer on the fly? I am trying to shuffle the data every epoch, but I have both images and some other non-image features that I want to concatenate at a later stage in the network. I could not find a reliable way to shuffle both the images and my other data in a way that preserves the alignment of the two. So I am thinking of re-generating imagelist.txt as well as the non-image data (in memory) every epoch and attach the
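The alignment-preserving shuffle itself is straightforward outside of Caffe: generate one permutation and apply it to both arrays. A NumPy sketch (array names are illustrative; the integer arrays stand in for image rows and their matching features):

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_aligned(images, features):
    """Shuffle two arrays with one shared permutation so row i still matches."""
    assert len(images) == len(features)
    perm = rng.permutation(len(images))
    return images[perm], features[perm]

imgs = np.arange(6)            # stand-ins for image rows
feats = np.arange(6) * 10      # matching non-image features
a, b = shuffle_aligned(imgs, feats)
print(b == a * 10)             # all True: alignment preserved
```

Writing the permuted order out to a regenerated imagelist.txt (for the ImageData layer) while feeding the identically permuted features to the MemoryData layer keeps the two streams in step.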

keras cnn_lstm input layer not accepting 1-D input

南楼画角 submitted on 2019-12-10 10:23:55
Question: I have sequences of long 1-D vectors (3000 digits) that I am trying to classify. I have previously implemented a simple CNN to classify them with relative success: def create_shallow_model(shape, repeat_length, stride): model = Sequential() model.add(Conv1D(75, repeat_length, strides=stride, padding='same', input_shape=shape, activation='relu')) model.add(MaxPooling1D(repeat_length)) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer=
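Keras convolutional/recurrent input layers expect 3-D input of shape (batch, timesteps, channels), so a 1-D vector of 3000 digits needs an explicit channel axis before it will be accepted. A NumPy sketch of the reshape (10 sequences chosen arbitrarily for illustration):

```python
import numpy as np

seqs = np.zeros((10, 3000))                          # 10 flat sequences
x = seqs.reshape(seqs.shape[0], seqs.shape[1], 1)    # add a channels axis
print(x.shape)  # (10, 3000, 1), matching input_shape=(3000, 1)
```

The same principle applies when wrapping the Conv1D in a TimeDistributed layer for a CNN-LSTM: each added wrapper expects one more leading axis, so the data must gain a matching dimension.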

Caffe classification labels in HDF5

北城余情 submitted on 2019-12-10 10:12:17
Question: I am fine-tuning a network. In one specific case I want to use it for regression, which works. In another case, I want to use it for classification. For both cases I have an HDF5 file with a label. For regression, this is just a 1-by-1 numpy array containing a float. I thought I could use the same label for classification, after changing my EuclideanLoss layer to SoftmaxLoss. However, I then get a negative loss, like so: Iteration 19200, loss = -118232 Train net output #0: loss = 39.3188 (* 1
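A properly computed softmax cross-entropy loss is never negative; a negative value usually indicates the labels are not valid integer class indices in [0, num_classes) (e.g. raw regression floats reused as-is). A NumPy sketch of the per-sample loss, showing why it is bounded below by zero:

```python
import numpy as np

def softmax_xent(logits, label):
    """Softmax cross-entropy for one sample with an integer class label."""
    z = logits - logits.max()                 # stabilize the exponentials
    log_probs = z - np.log(np.exp(z).sum())   # log-softmax, always <= 0
    return -log_probs[int(label)]             # so the loss is always >= 0

loss = softmax_xent(np.array([2.0, 0.5]), label=0)
print(loss >= 0)  # True
```

For Caffe's SoftmaxWithLoss the HDF5 label should therefore store the class index as a number in {0, ..., K-1}, not an arbitrary float target.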

keras loss function for 360 degree prediction

霸气de小男生 submitted on 2019-12-10 04:33:41
Question: I'm trying to predict azimuths using Keras/TensorFlow. y_true ranges from 0-359, but I need a loss function that handles predictions that have wrapped around and fall outside that range. Unfortunately, when I try any kind of modular division, tf.mod() or %, I get an error: LookupError: No gradient defined for operation 'FloorMod' (op type: FloorMod) So I think I've worked around this with the following: def mean_squared_error_360(y_true, y_pred): delta = K.minimum(K.minimum(K.abs(y_pred -
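The quantity such a loss needs is the wrapped angular distance: the error between 350° and 10° should be 20°, not 340°. A NumPy sketch of that distance (the plain error metric, not the asker's full differentiable Keras loss):

```python
import numpy as np

def angular_error(y_true, y_pred):
    """Smallest absolute difference between two azimuths, wrapping at 360."""
    d = np.abs(y_pred - y_true) % 360.0
    return np.minimum(d, 360.0 - d)

print(angular_error(350.0, 10.0))  # 20.0, not 340.0
```

The asker's K.minimum-based workaround computes the same thing without modular division, by taking the minimum over the raw difference and its ±360° shifts, which keeps every branch differentiable.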

Incompatible shapes on tensorflow.equal() op for correct predictions evaluation

天大地大妈咪最大 submitted on 2019-12-10 04:21:40
Question: Using the MNIST tutorial of TensorFlow, I am trying to make a convolutional network for face recognition with the "Database of Faces". The images are 112x92; I use 3 more convolutional layers to reduce them to 6 x 5, as advised here. I'm very new to convolutional networks and most of my layer declarations were made by analogy to the TensorFlow MNIST tutorial, so it may be a bit clumsy; feel free to advise me on this. x_image = tf.reshape(x, [-1, 112, 92, 1]) h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1
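Shape mismatches in tf.equal between predictions and labels almost always trace back to a miscounted flattened size feeding the first dense layer, and with non-square images the height and width must be tracked separately through each pooling step. A small helper for that bookkeeping (assuming the tutorial's 2x2, stride-2 max-pools; 'SAME' padding rounds up, 'VALID' rounds down):

```python
import math

def pooled(size, window=2, stride=2, padding="same"):
    """Spatial size after one max-pool, following TF's SAME/VALID rounding."""
    if padding == "same":
        return math.ceil(size / stride)
    return (size - window) // stride + 1

h, w = 112, 92
for _ in range(4):                    # four 2x2/2 pools with SAME padding
    h, w = pooled(h), pooled(w)
    print(h, w)                       # 56 46, 28 23, 14 12, 7 6
```

The flatten size before the dense layer is then h * w * num_filters; using a guessed value instead propagates a wrong batch dimension all the way to the tf.equal comparison.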

How do you change the dimension of your input pictures in pytorch?

元气小坏坏 submitted on 2019-12-09 23:08:07
Question: I made a convolutional neural network and I want it to take input pictures and output pictures, but when I turn the pictures into tensors they have the wrong dimensions: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [20, 3, 5, 5], but got 3-dimensional input of size [900, 1440, 3] instead How do I change the dimensions of the pictures? And why do they need to be changed? And how do I make the output a picture? I tried to use transform = transforms.Compose( [transforms
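The error message says it all: PyTorch's Conv2d expects 4-D input of shape (batch, channels, height, width), but an image loaded from disk is (height, width, channels) with no batch axis. A NumPy sketch of the required rearrangement (in torchvision, transforms.ToTensor() does the channel move and tensor.unsqueeze(0) adds the batch axis):

```python
import numpy as np

pic = np.zeros((900, 1440, 3))            # H x W x C, as loaded from disk
x = pic.transpose(2, 0, 1)[None, ...]     # -> 1 x C x H x W for Conv2d
print(x.shape)  # (1, 3, 900, 1440)
```

The batch dimension is needed because the weight tensor [20, 3, 5, 5] is applied to a batch of 3-channel images at once; to get a picture back out, the inverse transform (drop the batch axis, move channels last) is applied to the network's output.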

How to access kernel variables in tf.layers.conv2d?

你。 submitted on 2019-12-09 13:23:06
Question: I want to visualize the weights in my convolutional layers to watch how they change, but I cannot find a way to access the weights of layers built with tf.layers.conv2d. Thank you.

Answer 1: You could access that variable by name: weights = sess.run('<name_of_your_layer>/weights:0', feed_dict=...) If you're unsure about the name of your variable, see what it could be by printing tf.trainable_variables()

Answer 2: With inspiration from this: How to get CNN kernel values in Tensorflow Make sure to give it a