neural-network

What is the “Parameter” layer in caffe?

倖福魔咒の submitted on 2019-12-29 09:05:13
Question: Recently I came across the "Parameter" layer in caffe. It seems this layer exposes its internal parameter blob to its "top". What is this layer used for? Can you give a usage example? Answer 1: This layer was introduced in pull request #2079, with the following description: "This layer simply holds a parameter blob of user-defined shape, and shares it as its single top," which is exactly what you expected. It was introduced in the context of issue #1474, which basically proposes to treat…
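As a usage sketch, such a layer might be declared in a prototxt as below; the parameter_param / shape field names follow caffe.proto as introduced by that pull request, and the layer and blob names here are made up for illustration:

layer {
  name: "my_param"
  type: "Parameter"
  top: "my_param"               # the internal parameter blob is exposed as this top
  parameter_param {
    shape { dim: 1 dim: 3 }     # user-defined shape of the learnable blob
  }
}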

MATLAB neural network toolbox

走远了吗. submitted on 2019-12-29 08:24:08
Question: I used the MATLAB neural network toolbox to train on some data, but I want to run this neural network in a C++ program. How can I do that? Answer 1: You can use ML to generate your feature set (input layer) and then use an open-source C++ NN implementation to do training/classification. (E.g., http://takinginitiative.net/2008/04/23/basic-neural-network-tutorial-c-implementation-and-source-code/) If you want to use ML to train and C++ to classify, it shouldn't be too difficult to write some additional code to…
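Another common route is to export the trained weights from MATLAB and re-implement just the forward pass by hand, since it is only a few lines of arithmetic. A minimal sketch of that forward pass (written in Python/numpy for brevity; the same lines port directly to C++), assuming a one-hidden-layer network with tansig hidden units and a linear output, with W1, b1, W2, b2 exported from MATLAB's net.IW{1,1}, net.b{1}, net.LW{2,1}, net.b{2}:

import numpy as np

def forward(x, W1, b1, W2, b2):
    # hidden layer: tansig(W1*x + b1); MATLAB's tansig is just tanh
    h = np.tanh(np.dot(W1, x) + b1)
    # linear output layer
    return np.dot(W2, h) + b2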

TensorFlow: AttributeError: 'Tensor' object has no attribute 'shape'

匆匆过客 submitted on 2019-12-29 07:26:41
Question: I have the following code, which uses TensorFlow. After I reshape a list, it says AttributeError: 'Tensor' object has no attribute 'shape' when I try to print its shape.

# Get the shape of the training data.
print "train_data.shape: " + str(train_data.shape)
train_data = tf.reshape(train_data, [400, 1])
print "train_data.shape: " + str(train_data.shape)
train_size, num_features = train_data.shape

Output:

train_data.shape: (400,)
Traceback (most recent call last):
  File "", line 1, in …
  File "/home…
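In the TensorFlow versions this question dates from, a Tensor exposed its static shape through get_shape() rather than a .shape attribute (the attribute was added in later releases), so a sketch of the fix could be:

train_data = tf.reshape(train_data, [400, 1])
print("train_data.shape: " + str(train_data.get_shape()))   # (400, 1)
# as_list() turns the static TensorShape into plain Python ints
train_size, num_features = train_data.get_shape().as_list()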

How should “BatchNorm” layer be used in caffe?

人盡茶涼 submitted on 2019-12-29 06:44:08
Question: I am a little confused about how I should use/insert the "BatchNorm" layer in my models. I have seen several different approaches, for instance:

ResNets: "BatchNorm" + "Scale" (no parameter sharing), where the "BatchNorm" layer is followed immediately by a "Scale" layer:

layer {
  bottom: "res2a_branch1"
  top: "res2a_branch1"
  name: "bn2a_branch1"
  type: "BatchNorm"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  bottom: "res2a_branch1"
  top: "res2a_branch1"
  name: "scale2a_branch1"
  type: "Scale"
  scale_param {…
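Although the excerpt cuts off inside scale_param, the conventional ResNet-style pairing is normalization followed by a learned per-channel affine transform. A sketch of that complete pair (bias_term: true is the usual choice in published ResNet prototxts, not something quoted from the excerpt above):

layer {
  name: "bn"  type: "BatchNorm"  bottom: "x"  top: "x"
  batch_norm_param { use_global_stats: true }   # use stored statistics at test time
}
layer {
  name: "scale"  type: "Scale"  bottom: "x"  top: "x"
  scale_param { bias_term: true }   # learn per-channel gamma (scale) and beta (bias)
}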

Predicting the next word using the TensorFlow language model example

自古美人都是妖i submitted on 2019-12-29 06:43:12
Question: The TensorFlow tutorial on language models shows how to compute the probability of sentences: probabilities = tf.nn.softmax(logits). The comments below it also mention a way of predicting the next word instead of probabilities, but do not specify how this can be done. So how do I output a word instead of a probability using this example?

lstm = rnn_cell.BasicLSTMCell(lstm_size)
# Initial state of the LSTM memory.
state = tf.zeros([batch_size, lstm.state_size])
loss = 0.0
for current_batch_of…
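One way to get a concrete word is to run the softmax, take the argmax over the vocabulary, and map the winning id back through the tutorial's word_to_id dictionary. A sketch, where session and word_to_id are assumed to exist as in the tutorial:

import numpy as np

probs = session.run(probabilities)        # shape [batch_size, vocab_size]
next_word_id = int(np.argmax(probs[0]))   # highest-probability word id
id_to_word = {i: w for w, i in word_to_id.items()}
print(id_to_word[next_word_id])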

Why is the bias necessary in an ANN? Should we have a separate bias for each layer?

时间秒杀一切 submitted on 2019-12-29 05:06:51
Question: I want to make a model which predicts the future response of an input signal. The architecture of my network is [3, 5, 1]: 3 inputs, 5 neurons in the hidden layer, and 1 neuron in the output layer. My questions are: Should we have a separate bias for each hidden and output layer? Should we assign a weight to the bias at each layer (since the bias adds an extra value to the network and increases its burden)? Why is the bias always set to one? If eta can take different values, why don't we set the bias with…
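As a concrete illustration of what the bias buys you: without it, a neuron's output at x = 0 is pinned to f(0), so its activation threshold cannot shift. The bias input is fixed at 1 precisely so that the learned quantity is its weight. A tiny numpy sketch:

import numpy as np

def neuron(x, w, bias_weight):
    # the bias input is the constant 1; only its weight is learned,
    # which lets the activation threshold shift left or right
    return np.tanh(np.dot(w, x) + bias_weight * 1.0)

x = np.zeros(3)
print(neuron(x, np.ones(3), 0.0))   # 0.0: stuck at tanh(0) with no bias
print(neuron(x, np.ones(3), 0.5))   # non-zero: the bias weight shifts the output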

How to implement a custom metric in Keras?

不羁岁月 submitted on 2019-12-29 04:14:28
Question: I get this error: sum() got an unexpected keyword argument 'out' when I run this code:

import pandas as pd, numpy as np
import keras
from keras.layers.core import Dense, Activation
from keras.models import Sequential

def AUC(y_true, y_pred):
    not_y_pred = np.logical_not(y_pred)
    y_int1 = y_true * y_pred
    y_int0 = np.logical_not(y_true) * not_y_pred
    TP = np.sum(y_pred * y_int1)
    FP = np.sum(y_pred) - TP
    TN = np.sum(not_y_pred * y_int0)
    FN = np.sum(not_y_pred) - TN
    TPR = np.float(TP) / (TP + FN)
    FPR = np.float(FP) / (FP + TN)
    return((1…
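The error arises because Keras metric functions receive symbolic tensors, on which numpy operations such as np.sum do not work; a metric has to be written with backend ops. A sketch of a true-positive-rate metric using keras.backend (a simpler stand-in here, since exact AUC is not expressible as an elementwise tensor formula):

import keras.backend as K

def tpr(y_true, y_pred):
    # use tensor ops, not numpy: round predictions to 0/1 and count hits
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    tp = K.sum(y_true * y_pred_pos)
    fn = K.sum(y_true * (1 - y_pred_pos))
    return tp / (tp + fn + K.epsilon())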

Understanding tf.extract_image_patches for extracting patches from an image

烈酒焚心 submitted on 2019-12-29 03:31:31
Question: I found the method tf.extract_image_patches in the TensorFlow API, but I am not clear about its functionality. Say batch_size = 1, an image is of size 225x225x3, and we want to extract patches of size 32x32. How exactly does this function behave? Specifically, the documentation gives the dimensions of the output tensor as [batch, out_rows, out_cols, ksize_rows * ksize_cols * depth], but what out_rows and out_cols are is not mentioned. Ideally, given an input image…
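A sketch of the call for the sizes in the question (the stride is an assumption here, chosen equal to the patch size for non-overlapping patches); with VALID padding, out_rows and out_cols follow the usual convolution formula floor((225 - 32) / 32) + 1 = 7:

patches = tf.extract_image_patches(
    images,                    # shape [1, 225, 225, 3]
    ksizes=[1, 32, 32, 1],     # 32x32 patches
    strides=[1, 32, 32, 1],    # non-overlapping: stride equals patch size
    rates=[1, 1, 1, 1],        # no dilation
    padding='VALID')
# patches has shape [1, 7, 7, 32*32*3]; each position holds one flattened patch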

Trying to simulate a neural network in MATLAB by myself

一世执手 submitted on 2019-12-29 01:44:33
Question: I tried to create a neural network to estimate y = x^2. So I created a fitting neural network and gave it some samples of the input and output. I then tried to build this network in C++, but the result is different from what I expected. With the following inputs:

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12…
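One frequent reason a hand-written port disagrees with the toolbox is that MATLAB's fitting networks normalize inputs and targets with mapminmax by default, so the re-implementation must apply the same transform before the forward pass and invert it afterwards. A sketch of that transform (in Python for brevity; xmin and xmax come from the trained net's processing settings):

def mapminmax_apply(x, xmin, xmax, ymin=-1.0, ymax=1.0):
    # MATLAB's default mapminmax: rescale [xmin, xmax] onto [ymin, ymax]
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin

def mapminmax_reverse(y, xmin, xmax, ymin=-1.0, ymax=1.0):
    # invert the output normalization to return to target units
    return (y - ymin) * (xmax - xmin) / (ymax - ymin) + xmin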