neural-network

is error value incorrect for output neurons?

Submitted by 孤街浪徒 on 2019-12-25 05:33:26
Question: I use a fully connected neural network for "mnist" image recognition. My network has 784 input neurons, one hidden layer of 1569 neurons, and an output layer of 10 neurons. I have two questions. I use sigmoid activation and the error formula error = output * (1 - output) * (target - output). The problem is that if an output neuron is 1 and the required value is 0, then error = 0, but that's wrong, isn't it? Is it right to use sigmoid if the weighted sum of neurons in the hidden layer…
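
The zero-error behaviour the question describes is easy to verify numerically. Below is a minimal sketch (plain NumPy, illustrative values) showing that the delta output * (1 - output) * (target - output) collapses towards zero whenever the sigmoid saturates at 0 or 1, even though the prediction is maximally wrong:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_delta(output, target):
    # Delta for a sigmoid output unit under squared error:
    # the sigmoid derivative output*(1-output) scales the raw error.
    return output * (1.0 - output) * (target - output)

# Saturated neuron (output ~ 1) with target 0: delta is nearly 0,
# so weight updates stall despite the large error.
print(output_delta(sigmoid(10.0), 0.0))  # ~ -4.5e-05
print(output_delta(0.5, 0.0))            # -0.125: mid-range output still learns
```

With a cross-entropy loss the output * (1 - output) factor cancels out of the output-layer delta, leaving just (target - output), which is the usual fix for this saturation problem.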

Matlab - Neural Network Simulation (for Loop)

Submitted by 旧巷老猫 on 2019-12-25 04:58:16
Question: I am quite new to the Matlab NN toolbox and have created the following NN network: val.P = Exp; net = newff(minmax(p),[20,3],{'tansig','purelin'},'trainlm'); net.trainParam.epochs = 5000; % max epochs net.trainParam.goal = 1e-5; % training goal in mean squared error net.trainParam.min_grad = 0.05e-3; net.trainParam.show = 50; % # of epochs per display net.trainParam.max_fail = 20; net = init(net); [net,tr] = train(net,p,t,[],[],val); o1 = sim(net,Exp) How can I run the above, say, 20 times and store the…
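
The underlying pattern (reinitialise, train, simulate, and keep each run's output) is language-agnostic. Here is a minimal sketch of that pattern in Python; train_once is a hypothetical stand-in for the init/train/sim sequence above, and in Matlab the same idea is a for loop that appends o1 to a matrix or cell array:

```python
import numpy as np

def train_once(p, t, exp_inputs, seed):
    # Hypothetical stand-in for init(net) / train(...) / sim(net, Exp):
    # reinitialise with a fresh seed, "train", return outputs for Exp.
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=p.shape[1])
    return exp_inputs @ weights  # placeholder simulation output

p = np.random.rand(100, 5)          # training inputs
t = np.random.rand(100)             # training targets
exp_inputs = np.random.rand(10, 5)  # inputs to simulate after training

# Run 20 independent trainings; one row of outputs per run, so the runs
# can be compared, averaged, or inspected afterwards.
runs = np.vstack([train_once(p, t, exp_inputs, seed=k) for k in range(20)])
print(runs.shape)  # (20, 10)
```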

tensorflow nn mnist example with modified layer dimensions

Submitted by 戏子无情 on 2019-12-25 04:34:22
Question: I modified this mnist example so that it has two outputs and a middle layer of 10 nodes. It doesn't work; it gives me a 0.50 score all the time. I think it just picks one of the outputs and responds with that no matter what the input is. How could I fix this so that it actually does some learning? The outputs are supposed to represent 0 for 'skin tone' and 1 for 'no skin tone'. I use png input. def nn_setup(self): input_num = 784 * 3 # like mnist but with three channels mid_num = 10 output_num = …
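
For reference, the same topology (784 * 3 inputs, a 10-unit hidden layer, 2 outputs) looks like this as a minimal tf.keras sketch; this uses the high-level API rather than the original example's low-level graph code, and the optimizer settings are illustrative:

```python
import tensorflow as tf

# 784 * 3 inputs (RGB "mnist-like" images), one 10-unit hidden layer,
# and 2 softmax outputs for the skin-tone / no-skin-tone classes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(784 * 3,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

A constant 0.50 accuracy on a balanced two-class problem usually means the net predicts a single class for every input; scaling pixel values to [0, 1] and lowering the learning rate are common first things to check.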

Consistent results with multiple runs of h2o deeplearning

Submitted by 早过忘川 on 2019-12-25 04:23:01
Question: For a certain combination of parameters in the deeplearning function of h2o, I get different results each time I run it. args <- list(list(hidden = c(200,200,200), loss = "CrossEntropy", hidden_dropout_ratio = c(0.1, 0.1, 0.1), activation = "RectifierWithDropout", epochs = EPOCHS)) run <- function(extra_params) { model <- do.call(h2o.deeplearning, modifyList(list(x = columns, y = c("Response"), validation_frame = validation, distribution = "multinomial", l1 = 1e-5, balance_classes = TRUE,…
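
h2o's deep learning is non-deterministic by default because it uses HogWild!-style multi-threaded updates; a fixed seed only takes effect together with reproducible mode, which forces slower single-threaded training. A minimal sketch with the h2o Python API (the question itself uses the R interface; parameter values are illustrative):

```python
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()

model = H2ODeepLearningEstimator(
    hidden=[200, 200, 200],
    activation="RectifierWithDropout",
    hidden_dropout_ratios=[0.1, 0.1, 0.1],
    seed=1234,          # a fixed seed alone is not enough...
    reproducible=True,  # ...it must be paired with reproducible mode
)
# model.train(x=columns, y="Response", training_frame=train)  # as in the question
```

In R the equivalent is passing seed = 1234 and reproducible = TRUE to h2o.deeplearning.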

Calculating weights in a NN

Submitted by 倖福魔咒の on 2019-12-25 03:38:26
Question: So I am currently trying to implement my first NN with a genetic algorithm for training and a sigmoid activation function. It's all good, but I'm not quite sure in what ranges the weights must be. I've searched a bit on the question but with no luck. How does one choose the ranges of the weights in an NN? What does it depend on? Answer 1: The weights can be seen as an intrinsic property of the problem you're trying to solve using the GA/NN approach; there's no general best value for these, so you…
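
While there is no universally correct range, a common heuristic is to scale the initial interval by the layer's fan-in and fan-out (Xavier/Glorot-style bounds) so that sigmoid units start in their non-saturated region. A minimal NumPy sketch of building an initial GA population this way (layer sizes and population size are illustrative):

```python
import numpy as np

def init_weights(fan_in, fan_out, rng):
    # Xavier/Glorot-style uniform bounds keep initial weighted sums small,
    # so sigmoid units start near their responsive, non-flat region.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

rng = np.random.default_rng(0)
population = [init_weights(784, 30, rng) for _ in range(50)]  # 50 GA individuals
print(population[0].min(), population[0].max())  # bounded by +/- limit
```

During evolution the weights may of course drift outside this range; the bound only matters for where the search starts.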

Simple Linear Neural Network Weights from Training are not compatible with training results

Submitted by 一曲冷凌霜 on 2019-12-25 03:26:31
Question: The weights that I get from training, when applied directly to the input, return different results! I'll show it on a very simple example. Let's say we have an input vector x = 0:0.01:1; and a target vector t = x^2 (I know it is better to use a nonlinear network). After training a 2-layer linear network with one neuron in each layer, we get: sim(net,0.95) = 0.7850 (some error in training - that's ok and should be). Weights from net.IW, net.LW, net.b: IW = 0.4547 LW = 2.1993 b = 0.3328 -1.0620 If I use the…
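
The discrepancy can be reproduced by hand. For a purely linear two-layer network the output should be LW*(IW*x + b1) + b2, which with the quoted weights gives about 0.620, not 0.7850. A plausible explanation, assuming the network was created with newff, is that the toolbox applies mapminmax preprocessing by default, so the weights operate on inputs and targets rescaled to [-1, 1]. Redoing the arithmetic with that rescaling recovers the sim() result exactly:

```python
# Weights copied from the question.
IW, LW = 0.4547, 2.1993
b1, b2 = 0.3328, -1.0620

x = 0.95
raw = LW * (IW * x + b1) + b2
print(raw)  # ~0.620: applying the weights to x directly does NOT match sim()

# Assume newff's default mapminmax: x in [0,1] -> [-1,1] before the layers,
# and the network output mapped back to the target range [0,1] afterwards.
xn = 2.0 * x - 1.0
yn = LW * (IW * xn + b1) + b2
y = (yn + 1.0) / 2.0
print(y)  # ~0.7850, matching sim(net, 0.95)
```

So the stored weights are consistent with training after all; they just act in the normalised coordinate system.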

how to save matlab neural networks toolbox generated figures

Submitted by 被刻印的时光 ゝ on 2019-12-25 02:58:06
Question: In the Matlab workspace the output/results can be easily saved. But when I train the network with some data to see the performance of the training (in the Neural Network Toolbox), the regression plots along with the histograms and performance plots cannot be saved as a figure file. Currently I am using snipping tools to capture them. My question is: how can I do that? Are there any options to save those plots (generated in the Matlab Neural Network Toolbox)? I would be grateful to have any codes/answers…

Dropout entire input layer

Submitted by 空扰寡人 on 2019-12-25 02:34:48
Question: Suppose I have two inputs (each with a number of features) that I want to feed into a Dropout layer. I want each iteration to drop out one whole input, with all of its associated features, and keep the whole of the other input. After concatenating the inputs, I think I need to use the noise_shape parameter of Dropout, but the shape of the concatenated layer doesn't really let me do that. For two inputs of shape (15,), the concatenated shape is (None, 30), rather than (None, 15, 2), so one of…
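
One way around the flat (None, 30) shape, sketched below with tf.keras (the 0.5 rate and layer wiring are illustrative), is to stack the two inputs along a new axis instead of concatenating them; a noise_shape that broadcasts over the feature axis then drops an entire input at a time:

```python
import tensorflow as tf
from tensorflow.keras import layers

in_a = layers.Input(shape=(15,))
in_b = layers.Input(shape=(15,))

# Stack instead of concatenate: shape becomes (None, 2, 15),
# so axis 1 indexes which of the two inputs a value belongs to.
stacked = layers.Lambda(lambda t: tf.stack(t, axis=1))([in_a, in_b])

# noise_shape=(None, 2, 1): one mask value per input (axis 1), broadcast
# across all 15 features (axis 2), so a whole input drops out together.
dropped = layers.Dropout(0.5, noise_shape=(None, 2, 1))(stacked)

flat = layers.Flatten()(dropped)  # back to (None, 30) for downstream layers
model = tf.keras.Model([in_a, in_b], flat)
```

As usual, the mask is only applied during training; at inference both inputs pass through unchanged (Keras uses inverted dropout, scaling at training time).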

Keras on Google ML Engine error: You must feed a value for placeholder tensor

Submitted by 故事扮演 on 2019-12-25 02:29:14
Question: I have deployed a model on Google Cloud ML Engine, but when I try to perform a prediction (I'm using curl), this is the result I obtain: {"error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\"You must feed a value for placeholder tensor 'lstm_1/keras_learning_phase' with dtype bool\n\t [[Node: lstm_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"]()]]\")"} How…
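
The keras_learning_phase placeholder ends up in the exported graph because a layer (here, dropout inside the LSTM) behaves differently in training and inference. A common remedy, sketched below for TF 1.x-era Keras (build_model and the weights path are hypothetical placeholders), is to pin the learning phase to inference mode before rebuilding and exporting the model, so the placeholder is never created:

```python
from keras import backend as K

# Pin Keras to inference mode BEFORE the graph is (re)built, so no
# 'keras_learning_phase' placeholder is baked into the SavedModel.
K.set_learning_phase(0)

model = build_model()              # hypothetical: recreate the architecture
model.load_weights("weights.h5")   # hypothetical path to the trained weights
# ...then export the SavedModel for ML Engine as before.
```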