neural-network

Where can I find the label map between a trained model's output (e.g. GoogLeNet) and the real class labels?

老子叫甜甜 Submitted on 2019-12-19 08:54:13
Question: Everyone, I am new to caffe. Currently, I am trying to use the trained GoogLeNet downloaded from the model zoo to classify some images. However, the network's output seems to be a vector rather than a real label (like dog or cat). Where can I find the label map between a trained model's output, like GoogLeNet's, and the real class labels? Thanks.

Answer 1: If you got caffe from git, you should find a shell script get_ilsvrc_aux.sh in the data/ilsvrc12 folder. This script should download several files used for …
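Among the files fetched by get_ilsvrc_aux.sh is synset_words.txt, which lists the 1000 ILSVRC class names in the same order as the output vector. A minimal sketch of turning the output into a readable label, assuming a pycaffe Net object named net and an output blob named prob (as in the BVLC GoogLeNet deploy prototxt; both are assumptions, not part of the original answer):

    import numpy as np

    # One class name per line, in the same order as the 1000-way output vector
    with open('data/ilsvrc12/synset_words.txt') as f:
        labels = [line.strip() for line in f]

    # 'prob' is the softmax output blob of the trained GoogLeNet
    prob = net.forward()['prob'][0]        # shape (1000,)
    print(labels[int(np.argmax(prob))])    # e.g. 'n02123045 tabby, tabby cat'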

Are there any computational efficiency differences between nn.functional and nn.Sequential in PyTorch?

﹥>﹥吖頭↗ Submitted on 2019-12-19 07:56:12
Question: The following is a feed-forward network using the nn.functional module in PyTorch:

    import torch.nn as nn
    import torch.nn.functional as F

    class newNetwork(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(784, 128)
            self.fc2 = nn.Linear(128, 64)
            self.fc3 = nn.Linear(64, 10)

        def forward(self, x):
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = F.softmax(self.fc3(x))
            return x

    model = newNetwork()
    model

The following is the same feed-forward network using the nn.Sequential module to …
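The question is truncated above; for reference, a minimal nn.Sequential equivalent of the same network might look like the following. This is a sketch, not the asker's original code (note that F.softmax without a dim argument is deprecated, so dim=1 is made explicit here):

    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 128),
        nn.ReLU(),
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
        nn.Softmax(dim=1),
    )

Both versions build the same sequence of operations, so any efficiency difference is negligible; the choice between them is essentially stylistic.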

Can I send callbacks to a KerasClassifier?

巧了我就是萌 Submitted on 2019-12-19 05:19:14
Question: I want the classifier to run faster and stop early if the patience reaches the number I set. In the following code it does 10 iterations of fitting the model.

    import numpy
    import pandas
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.layers import Dropout
    from keras.wrappers.scikit_learn import KerasClassifier
    from keras.callbacks import EarlyStopping, ModelCheckpoint
    from keras.constraints import maxnorm
    from keras.optimizers import SGD
    from sklearn.model…
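The code is cut off above, but the usual route is to forward the callbacks to the underlying Keras fit call. A minimal sketch, assuming a hypothetical build_model function that returns a compiled Sequential model, plus data arrays X and y (none of which appear in the original snippet):

    from keras.callbacks import EarlyStopping
    from keras.wrappers.scikit_learn import KerasClassifier
    from sklearn.model_selection import cross_val_score

    early_stop = EarlyStopping(monitor='val_loss', patience=3)
    clf = KerasClassifier(build_fn=build_model, epochs=100, batch_size=10, verbose=0)

    # cross_val_score forwards fit_params to each KerasClassifier.fit call;
    # validation_split gives EarlyStopping a val_loss to monitor
    scores = cross_val_score(clf, X, y, cv=10,
                             fit_params={'callbacks': [early_stop],
                                         'validation_split': 0.2})

Without cross-validation, KerasClassifier.fit also accepts the callbacks directly, e.g. clf.fit(X, y, callbacks=[early_stop], validation_split=0.2).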

Using leaky ReLU in TensorFlow

孤者浪人 Submitted on 2019-12-19 05:11:22
Question: How can I change G_h1 = tf.nn.relu(tf.matmul(z, G_W1) + G_b1) to leaky ReLU? I have tried looping over the tensor using max(value, 0.01*value), but I get TypeError: Using a tf.Tensor as a Python bool is not allowed. I also tried to find the source code of relu on the TensorFlow GitHub so that I could modify it into a leaky ReLU, but I couldn't find it.

Answer 1: You could write one based on tf.nn.relu, something like:

    def lrelu(x, alpha):
        return tf.nn.relu(x) - alpha * tf.nn.relu(-x)

Edit: TensorFlow 1.4 now …
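Applied to the question's line, the helper drops in directly (z, G_W1, and G_b1 are the question's own variables). Since TensorFlow 1.4 there is also a built-in tf.nn.leaky_relu, which is presumably what the truncated edit refers to:

    import tensorflow as tf

    def lrelu(x, alpha=0.01):
        # max(x, alpha * x) written with two ReLUs, so it stays a tensor op
        return tf.nn.relu(x) - alpha * tf.nn.relu(-x)

    G_h1 = lrelu(tf.matmul(z, G_W1) + G_b1)

    # On TensorFlow >= 1.4 the built-in works the same way:
    # G_h1 = tf.nn.leaky_relu(tf.matmul(z, G_W1) + G_b1, alpha=0.01)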

How to input multiple N-D arrays to a net in caffe?

蓝咒 Submitted on 2019-12-19 05:05:05
Question: I want to create a custom loss layer for semantic segmentation in caffe that requires multiple inputs. I want this loss function to have an additional input factor in order to penalize missed detections of small objects. To do that, I have created a ground-truth image that contains a weight for each pixel: if the pixel belongs to a small object, the weight is high. I am a newbie in caffe and I do not know how to feed my net with three 2-D signals at the same time (image, gt-mask and the per-pixel weights…
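One common way to feed several aligned inputs into a Caffe net is an HDF5 file with one dataset per input, read by an HDF5Data layer that declares one top per dataset. A minimal sketch of writing such a file with h5py; the array names images, masks, and weights are placeholders, not from the question:

    import h5py
    import numpy as np

    # Assumed shapes: images (N, 3, H, W); masks and weights (N, 1, H, W)
    with h5py.File('train_data.h5', 'w') as f:
        f.create_dataset('data', data=images.astype(np.float32))
        f.create_dataset('label', data=masks.astype(np.float32))
        f.create_dataset('weights', data=weights.astype(np.float32))

    # The HDF5Data layer in the prototxt would then declare three tops,
    #   top: "data"  top: "label"  top: "weights"
    # and the custom loss layer takes "label" and "weights" as extra bottoms.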

TensorFlow: reshaping a tensor

一笑奈何 Submitted on 2019-12-19 04:47:12
Question: I'm trying to use tf.nn.sparse_softmax_cross_entropy_with_logits and I have followed the answer by user Olivier Moindrot [here][1], but I'm getting a dimension error. I'm building a segmentation network, so the input image is 200x200 and the output image is 200x200. The classification is binary: foreground and background. After I build the CNN with

    pred = conv_net(x, weights, biases, keep_prob)

pred looks like this:

    <tf.Tensor 'Add_1:0' shape=(?, 40000) dtype=float32>

The CNN has a couple of conv…
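The question breaks off above, but the shape (?, 40000) already hints at the usual mistake: sparse_softmax_cross_entropy_with_logits needs one logit per class, so a binary 200x200 segmentation net must emit 200*200*2 values per image, not 200*200. A minimal sketch, assuming the final layer is widened to 80000 outputs and y holds integer pixel labels in {0, 1} (both assumptions, since the original code is truncated):

    import tensorflow as tf

    # pred assumed (?, 80000): two class logits for each of the 200*200 pixels
    logits = tf.reshape(pred, [-1, 200, 200, 2])
    labels = tf.reshape(y, [-1, 200, 200])   # int32/int64 class ids, no one-hot

    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
                                                       logits=logits))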

XOR Neural Network in Java

一曲冷凌霜 Submitted on 2019-12-19 02:06:22
Question: I'm trying to implement and train a five-neuron neural network with backpropagation for the XOR function in Java. My code (please excuse its hideousness):

    public class XORBackProp {

        private static final int MAX_EPOCHS = 500;

        // weights
        private static double w13, w23, w14, w24, w35, w45;
        private static double theta3, theta4, theta5;

        // neuron outputs
        private static double gamma3, gamma4, gamma5;

        // neuron error gradients
        private static double delta3, delta4, delta5;

        // weight corrections
        private…
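The Java code is cut off above; as a compact reference for the same 2-2-1 topology (two inputs, hidden neurons 3-4, output neuron 5), here is a sketch of the standard sigmoid backprop updates in Python with NumPy, the language used elsewhere on this page. It illustrates the technique, it is not a translation of the asker's code:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.uniform(-1, 1, (2, 2)); b1 = np.zeros(2)  # input -> hidden (neurons 3, 4)
    W2 = rng.uniform(-1, 1, (2, 1)); b2 = np.zeros(1)  # hidden -> output (neuron 5)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(5000):
        h = sigmoid(X @ W1 + b1)             # hidden outputs (gamma3, gamma4)
        out = sigmoid(h @ W2 + b2)           # network output (gamma5)

        d_out = (out - y) * out * (1 - out)  # output error gradient (delta5)
        d_h = (d_out @ W2.T) * h * (1 - h)   # hidden error gradients (delta3, delta4)

        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)  # weight corrections
        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

    # Typically converges toward [0, 1, 1, 0]; XOR with only two hidden
    # units can stall on an unlucky initialization, so try another seed.
    print(out.round(2))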

Subtract mean from image

北战南征 Submitted on 2019-12-18 21:53:40
Question: I'm implementing a CNN with Theano. In the paper, I have to do this image preprocessing before training the CNN:

    We extracted RGB patches of 61x61 dimensions associated with each poselet
    activation, subtracted the mean and used this data to train the convnet
    model shown in Table 1

Can you tell me what "subtracted the mean" means? Tell me if these steps are correct (it is what I understood):

1) Compute the mean for the Red channel, Green channel and Blue channel for the whole image
2) For each…
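The steps break off above, but the per-channel mean subtraction described in step 1 is a one-liner in NumPy. A minimal sketch, assuming img is an H x W x 3 RGB array (the variable name is an assumption):

    import numpy as np

    img = img.astype(np.float32)
    channel_mean = img.mean(axis=(0, 1))  # one scalar mean per R, G, B channel
    img_centered = img - channel_mean      # broadcasts over all pixels

Note that many training pipelines compute the mean over the whole training set (or use a per-pixel mean image) rather than per image; which variant the paper intends is not clear from the quoted sentence alone.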