neural-network

Setting hidden layers and neurons in neuralnet and caret (R)

≯℡__Kan透↙ submitted on 2020-07-23 06:32:09
Question: I would like to cross-validate a neural network using the packages neuralnet and caret. The data df can be copied from this post. When running the neuralnet() function, there is an argument called hidden where you can set the number of hidden layers and the neurons in each. Let's say I want 2 hidden layers with 3 and 2 neurons respectively; it would be written as hidden = c(3, 2). However, as I want to cross-validate it, I decided to use the fantastic caret package. But when using the function train(), I…
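
For intuition only, here is how the same architecture (two hidden layers with 3 and 2 neurons) is declared in Python's scikit-learn, where the analogous argument is hidden_layer_sizes. This is a hedged side-by-side sketch, not an answer to the caret-specific part of the question:

    # for comparison only: hidden_layer_sizes plays the role of neuralnet's `hidden`
    from sklearn.neural_network import MLPClassifier

    # two hidden layers with 3 and 2 neurons respectively (cf. hidden = c(3, 2) in R)
    clf = MLPClassifier(hidden_layer_sizes=(3, 2), max_iter=1000, random_state=0)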

Does epoch size need to be an exact multiple of batch size?

…衆ロ難τιáo~ submitted on 2020-07-21 07:37:44
Question: When training a net, does it matter if the number of samples in the epoch is not an exact multiple of the batch size? My training code doesn't seem to mind if this is the case, though my loss curve is pretty noisy at the moment (in case that is a related issue). This would be useful to know, as if it is not an issue it saves on messing around with the dataset to make it quantized by batch size. It may also be less wasteful of captured data. Answer 1: Does it matter if the number of samples in the…
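
In practice most data pipelines simply emit one smaller final batch. A minimal sketch, assuming PyTorch's DataLoader and a made-up dataset of 100 samples, showing that 100 samples with batch_size=32 just yields a short last batch unless drop_last=True:

    import torch
    from torch.utils.data import TensorDataset, DataLoader

    # hypothetical dataset: 100 samples, not a multiple of the batch size of 32
    x = torch.randn(100, 8)
    y = torch.randint(0, 2, (100,))
    loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True, drop_last=False)

    for xb, yb in loader:
        print(xb.shape[0])   # 32, 32, 32, 4 -- the final batch is simply smaller

    # drop_last=True would discard the 4 leftover samples instead of training on them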

Converting Caffe model to CoreML

五迷三道 submitted on 2020-07-18 06:09:51
Question: I am working to understand CoreML. For a starter model, I've downloaded Yahoo's Open NSFW caffemodel. You give it an image, and it gives you a probability score (between 0 and 1) that the image contains unsuitable content. Using coremltools, I've converted the model to a .mlmodel and brought it into my app. It appears in Xcode like so: [screenshot omitted]. In my app, I can successfully pass an image, and the output appears as an MLMultiArray. Where I am having trouble is understanding how to use this MLMultiArray to…
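
For context, the conversion step itself usually looks something like the sketch below. This assumes a pre-4.0 coremltools release (the Caffe converter was removed in later versions), and the file names and the 'data' input name are hypothetical placeholders to be checked against the actual deploy prototxt:

    import coremltools

    # convert a Caffe weights/prototxt pair; declaring the input as an image lets
    # Xcode expose an image (CVPixelBuffer) input instead of a raw multi-array
    coreml_model = coremltools.converters.caffe.convert(
        ('open_nsfw.caffemodel', 'deploy.prototxt'),   # hypothetical file names
        image_input_names='data',                      # assumed input blob name
    )
    coreml_model.save('OpenNSFW.mlmodel')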

BatchNorm momentum convention PyTorch

自闭症网瘾萝莉.ら submitted on 2020-07-17 05:46:04
Question: Is the BatchNorm momentum convention (default = 0.1) correct, given that in other libraries, e.g. TensorFlow, it seems to usually be 0.9 or 0.99 by default? Or maybe we are just using a different convention? Answer 1: It seems that the parametrization convention is different in PyTorch than in TensorFlow, so that 0.1 in PyTorch is equivalent to 0.9 in TensorFlow. To be more precise: in TensorFlow, running_mean = decay*running_mean + (1-decay)*new_value; in PyTorch, running_mean = (1-decay)*running_mean + decay*new_value…
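
A minimal sketch of translating between the two conventions, assuming PyTorch's nn.BatchNorm2d; the decay value 0.99 is just an example:

    import torch.nn as nn

    # TensorFlow-style decay: the weight on the OLD running estimate
    tf_decay = 0.99

    # PyTorch's momentum is the weight on the NEW observation, so convert as 1 - decay:
    # running_mean = (1 - momentum) * running_mean + momentum * batch_mean
    bn = nn.BatchNorm2d(num_features=64, momentum=1.0 - tf_decay)   # momentum = 0.01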

How To Determine the 'filters' Parameter in the Keras Conv2D Function

岁酱吖の submitted on 2020-07-16 15:40:11
Question: I'm just beginning my ML journey and have done a few tutorials. One thing that's not clear (to me) is how the 'filters' parameter is determined for Keras Conv2D. Most sources I've read simply set the parameter to 32 without explanation. Is this just a rule of thumb, or do the dimensions of the input images play a part? For example, the images in CIFAR-10 are 32x32. Specifically: model = Sequential() filters = 32 model.add(Conv2D(filters, (3, 3), padding='same', input_shape=x_train.shape[1:]))…
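
A minimal sketch of the usual convention, using tf.keras (the layer counts and sizes here are arbitrary illustrative choices): the filter count is a capacity knob chosen by the designer, independent of the 32x32 input resolution, and it is commonly doubled after each downsampling stage:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D

    # filters = number of learned feature maps per layer (a design choice),
    # not something dictated by the 32x32 CIFAR-10 image size
    model = Sequential([
        Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), padding='same', activation='relu'),   # double after downsampling
        MaxPooling2D((2, 2)),
        Conv2D(128, (3, 3), padding='same', activation='relu'),
    ])
    model.summary()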

Creating custom conditional metric with Keras

老子叫甜甜 submitted on 2020-07-08 13:50:48
Question: I am trying to create the following metric for my neural network using Keras: [custom metric formula image omitted] where d = y_pred - y_true and both y_pred and y_true are vectors. With the following code: import keras.backend as K def score(y_true, y_pred): d = (y_pred - y_true) if d < 0: return K.exp(-d/10) - 1 else: return K.exp(d/13) - 1 For the use of compiling my model: model.compile(loss='mse', optimizer='adam', metrics=[score]) I received the following error code and I have not been able to correct the…
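
The Python if on a Keras tensor is what breaks here: d is a symbolic tensor, so d < 0 has no single truth value. A hedged sketch of one common fix, using an element-wise K.switch and averaging over the batch (the constants 10 and 13 are taken from the question):

    import keras.backend as K   # or: from tensorflow.keras import backend as K

    def score(y_true, y_pred):
        d = y_pred - y_true
        # element-wise branch instead of a Python `if` on a tensor,
        # then reduce to a scalar so Keras can report it as a metric
        per_element = K.switch(d < 0, K.exp(-d / 10.0) - 1.0, K.exp(d / 13.0) - 1.0)
        return K.mean(per_element)

    # usage stays the same as in the question:
    # model.compile(loss='mse', optimizer='adam', metrics=[score])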

问题 I am trying to create the following metric for my neural network using keras: Custom Keras metric where d=y_{pred}-y_{true} and both y_{pred} and y_{true} are vectors With the following code: import keras.backend as K def score(y_true, y_pred): d=(y_pred - y_true) if d<0: return K.exp(-d/10)-1 else: return K.exp(d/13)-1 For the use of compiling my model: model.compile(loss='mse', optimizer='adam', metrics=[score]) I received the following error code and I have not been able to correct the