neural-network

keras model.evaluate() does not show loss

眉间皱痕 submitted on 2019-12-13 14:25:11
Question: I've created a neural network of the following form in Keras:

```python
from keras.layers import Dense, Activation, Input
from keras import Model

input_dim_v = 3
output_dim_v = 1  # output dimensionality
hidden_dims = [100, 100, 100]

inputs = Input(shape=(input_dim_v,))
net = inputs
for h_dim in hidden_dims:
    net = Dense(h_dim)(net)
    net = Activation("elu")(net)
outputs = Dense(output_dim_v)(net)

model_v = Model(inputs=inputs, outputs=outputs)
model_v.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse'])
```

Later, I train it on …
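For reference, a minimal sketch (using random stand-in data shaped to match the model above) of where the loss shows up: `model.evaluate()` returns the loss as the first element of its result list, ahead of any compiled metrics:

```python
import numpy as np

X = np.random.rand(256, input_dim_v)   # stand-in inputs
y = np.random.rand(256, output_dim_v)  # stand-in targets

model_v.fit(X, y, epochs=2, batch_size=32, verbose=0)

# For the compile() call above this returns [loss, mse]; index 0 is the loss
results = model_v.evaluate(X, y, verbose=0)
print(dict(zip(model_v.metrics_names, results)))
```

Pairing the values with `model_v.metrics_names` makes it unambiguous which number is the loss.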

Caret nnet: logloss not working for twoClassSummary

时光怂恿深爱的人放手 submitted on 2019-12-13 13:40:45
Question: I have a training dataset:

```
   Out  Revolver     Ratio  Num  ...
0    1  0.766127  0.802982    0  ...
1    0  0.957151  0.121876    1
2    0  0.658180  0.085113    0
3    0  0.233810  0.036050    3
4    1  0.907239  0.024926    5
```

The outcome variable `Out` is binary and only takes the values 0 or 1; `Num` is not a factor. I then attempted to run `nnet` using `caret`. I eventually want to try `nnGrid`, but first I just want to make sure this works:

```r
nnTrControl = trainControl(method = "cv", classProbs = TRUE,
                           summaryFunction = twoClassSummary, number = 2 …
```
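For comparison, here is an analogous cross-validated log-loss setup sketched in Python with scikit-learn (the question itself is about R's caret, and the data below is synthetic):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the data frame above
rng = np.random.default_rng(0)
X = rng.random((100, 3))     # Revolver, Ratio, Num
y = rng.integers(0, 2, 100)  # binary Out

# 2-fold CV scored by log loss (scikit-learn negates it so higher is better)
scores = cross_val_score(MLPClassifier(max_iter=500), X, y,
                         cv=2, scoring='neg_log_loss')
print(scores)
```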

Different image dimensions during training and testing time for FCNs

早过忘川 submitted on 2019-12-13 13:36:53
Question: I am reading multiple conflicting Stack Overflow posts and I'm really confused about what the reality is. My question is the following: if I trained an FCN on 128x128x3 images, is it possible to feed it A) an image of size 256x256x3, B) 128x128, or C) neither, since the inputs have to be the same during training and testing? Consider SO post #1: it suggests that the images have to have the same dimensions during input and output. This makes sense to me. SO post #2: in this post, it …
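The underlying point can be sketched directly in Keras (layer sizes here are arbitrary): a network built only from convolutional layers can declare its spatial dimensions as `None` and accept different image sizes at train and test time, whereas any `Flatten`/`Dense` head pins the input shape. Only the channel count must stay fixed:

```python
import numpy as np
from tensorflow.keras import layers, Model

# Fully convolutional: spatial dimensions left unspecified
inputs = layers.Input(shape=(None, None, 3))
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
x = layers.Conv2D(8, 3, padding='same', activation='relu')(x)
outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)
fcn = Model(inputs, outputs)

# The same model handles both sizes; only the 3 channels are mandatory
print(fcn.predict(np.zeros((1, 128, 128, 3))).shape)  # (1, 128, 128, 1)
print(fcn.predict(np.zeros((1, 256, 256, 3))).shape)  # (1, 256, 256, 1)
```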

Caffe predicts same class regardless of image

ⅰ亾dé卋堺 submitted on 2019-12-13 13:18:31
Question: I modified the MNIST example, and when I train it with my 3 image classes it returns an accuracy of 91%. However, when I modify the C++ example with a deploy prototxt file and a labels file and try to test it on some images, it returns a prediction of the second class (1, circle) with a probability of 1.0 no matter what image I give it, even images that were used in the training set. I've tried a dozen images and it consistently predicts the one class. To clarify things, in the C++ …
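A frequent cause of "always the same class with probability 1.0" is preprocessing that differs between training and deployment (pixel scaling, mean subtraction, channel order). As a cross-check, here is a sketch using Caffe's Python interface rather than the C++ example; the file names and the `'prob'` output blob name are hypothetical and depend on the actual deploy prototxt:

```python
import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Reproduce the training-time preprocessing exactly
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))  # HWC -> CHW
transformer.set_raw_scale('data', 255)        # [0,1] -> [0,255] if trained that way
# transformer.set_mean('data', mean_array)    # subtract the training mean, if used

img = caffe.io.load_image('example.png')
net.blobs['data'].data[...] = transformer.preprocess('data', img)
probs = net.forward()['prob'][0]
print(probs.argmax(), probs)
```

If the Python path classifies correctly with these settings, the bug is likely in the C++ example's preprocessing rather than in the trained model.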

Tensorflow: show or save forget gate values in LSTM

亡梦爱人 submitted on 2019-12-13 13:12:58
Question: I am using the LSTM model that comes by default in TensorFlow. I would like to know how to save or show the values of the forget gate at each step; has anyone done this before, or at least something similar? So far I have tried tf.print, but many values appear (even more than the ones I was expecting). I would try plotting something with TensorBoard, but I think those gates are just variables and not extra layers that I can print (also because they are inside the TF …
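One workable approach, sketched below for a tf.keras LSTM with default activations: pull the trained layer's weights and replay the recurrence in NumPy, recording the forget gate f_t = sigmoid(x_t W_f + h_{t-1} U_f + b_f) at every step. This mirrors the computation rather than hooking into TensorFlow's internals, so it works on any saved model:

```python
import numpy as np
import tensorflow as tf

units, input_dim, timesteps = 8, 4, 10
lstm = tf.keras.layers.LSTM(units)
x = np.random.rand(1, timesteps, input_dim).astype('float32')
lstm(x)  # build the layer so its weights exist

# Keras packs the gates as [input, forget, cell, output] along the last axis
W, U, b = lstm.get_weights()
gates = {g: (W[:, i*units:(i+1)*units],
             U[:, i*units:(i+1)*units],
             b[i*units:(i+1)*units]) for i, g in enumerate('ifco')}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h, c = np.zeros((1, units)), np.zeros((1, units))
forget_gates = []
for t in range(timesteps):
    x_t = x[:, t, :]
    i = sigmoid(x_t @ gates['i'][0] + h @ gates['i'][1] + gates['i'][2])
    f = sigmoid(x_t @ gates['f'][0] + h @ gates['f'][1] + gates['f'][2])
    g = np.tanh(x_t @ gates['c'][0] + h @ gates['c'][1] + gates['c'][2])
    o = sigmoid(x_t @ gates['o'][0] + h @ gates['o'][1] + gates['o'][2])
    c = f * c + i * g
    h = o * np.tanh(c)
    forget_gates.append(f)  # one (1, units) array per timestep

print(np.array(forget_gates).shape)  # (timesteps, 1, units)
```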

Multilayer perceptron - backpropagation

北城以北 submitted on 2019-12-13 12:21:58
Question: I have a school project to program a multilayer perceptron that classifies data into three classes. I have implemented the backpropagation algorithm from http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html. I have checked my algorithm (by manually calculating each step of backpropagation) against the steps explained there, and it matches them. For classification I am using one-hot coding, and I have inputs consisting of vectors with 2 values and three output neurons (one for each class). After …
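For orientation, here is a compact NumPy sketch of the same setup (2-valued inputs, three one-hot output neurons) with the backpropagation updates written out. Note it uses tanh hidden units and a softmax/cross-entropy output rather than the tutorial's sigmoids, and the hidden size, learning rate, and random data are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((150, 2))                 # input vectors with 2 values
Y = np.eye(3)[rng.integers(0, 3, 150)]   # one-hot targets for 3 classes

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 3)), np.zeros(3)
lr = 0.1

for epoch in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)    # softmax probabilities
    # Backward pass (gradient of cross-entropy w.r.t. z is p - Y)
    dz = (p - Y) / len(X)
    dW2, db2 = h.T @ dz, dz.sum(axis=0)
    dh = dz @ W2.T * (1 - h ** 2)        # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad               # in-place gradient step
```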

How can we define one-to-one, one-to-many, many-to-one, and many-to-many LSTM neural networks in Keras? [duplicate]

Deadly submitted on 2019-12-13 12:20:04
Question: This question already has answers here: Many to one and many to many LSTM examples in Keras (2 answers, closed last year). I am reading this article (The Unreasonable Effectiveness of Recurrent Neural Networks) and want to understand how to express one-to-one, one-to-many, many-to-one, and many-to-many LSTM neural networks in Keras. I have read a lot about RNNs and understand how LSTMs work, in particular the vanishing gradient problem, LSTM cells, their outputs and states, sequence output, etc. …
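As a minimal sketch (tf.keras; layer sizes arbitrary): `return_sequences` switches between many-to-one and many-to-many, and `RepeatVector` turns a single vector into a sequence for one-to-many:

```python
from tensorflow.keras import layers, models

timesteps, features = 10, 4

# many-to-one: only the final hidden state is emitted
m2o = models.Sequential([
    layers.Input((timesteps, features)),
    layers.LSTM(32),                          # return_sequences=False
    layers.Dense(1),
])

# many-to-many (equal lengths): one output per timestep
m2m = models.Sequential([
    layers.Input((timesteps, features)),
    layers.LSTM(32, return_sequences=True),
    layers.TimeDistributed(layers.Dense(1)),
])

# one-to-many: repeat one input vector across timesteps, then decode
o2m = models.Sequential([
    layers.Input((features,)),
    layers.RepeatVector(timesteps),
    layers.LSTM(32, return_sequences=True),
    layers.TimeDistributed(layers.Dense(1)),
])

print(m2o.output_shape)  # (None, 1)
print(m2m.output_shape)  # (None, 10, 1)
print(o2m.output_shape)  # (None, 10, 1)
```

One-to-one is just an ordinary feed-forward layer with no recurrence, so it is omitted here.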

Interpret the output of neural network in matlab

三世轮回 submitted on 2019-12-13 10:36:23
Question: I have built a neural network model with 3 classes. I understand that the ideal output for a classification task is a Boolean 1 for the correct class and 0 for the other classes; for example, the best result for a sample of the first class, where each element of the output vector indicates how strongly the data belongs to that class, is [1, 0, 0]. But the output on the testing data will not look like that; instead it will be rational numbers like [2.4, …
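The standard reading of such an output is: take the argmax as the predicted class, and, if calibrated probabilities are wanted, pass the raw scores through a softmax. A small sketch (in Python rather than MATLAB, with a hypothetical score vector):

```python
import numpy as np

scores = np.array([2.4, 0.3, -1.1])  # hypothetical raw network outputs

# argmax picks the predicted class straight from the raw scores
predicted_class = int(np.argmax(scores))

# softmax rescales the scores into probabilities that sum to 1
probs = np.exp(scores - scores.max())
probs /= probs.sum()

print(predicted_class, probs)
```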

Optimizing number of optimum features

孤街醉人 submitted on 2019-12-13 10:23:47
Question: I am training a neural network using Keras. Every time I train my model, I use a slightly different set of features selected via tree-based feature selection with ExtraTreesClassifier(). After each training run I compute the ROC AUC on my validation set and then loop back to train the model again with a different feature set. This process is very inefficient, and I want to select the optimum number of features using an optimization technique available in some Python library. The …
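One standard way to collapse that loop into a single search is recursive feature elimination with cross-validation, using the same ExtraTreesClassifier as the ranker. A sketch on synthetic data (the selected feature count and mask would then drive the Keras model):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import RFECV

# Synthetic stand-in for the real training set
rng = np.random.default_rng(0)
X = rng.random((500, 40))
y = rng.integers(0, 2, 500)

# Cross-validated recursive elimination, scored by ROC AUC
selector = RFECV(ExtraTreesClassifier(n_estimators=100),
                 step=1, cv=3, scoring='roc_auc')
selector.fit(X, y)

print(selector.n_features_)  # optimum number of features found
print(selector.support_)     # boolean mask of the kept features
```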

matlab syntax errors in single layer neural network [closed]

我的梦境 submitted on 2019-12-13 10:11:48
Question: (Closed as off-topic for Stack Overflow 4 years ago.) I have to implement a single-layer neural network (perceptron). For this I have two data-set files, one for the inputs and one for the outputs. I have to do this in MATLAB without using the Neural Network Toolbox. The format of the two files is given below.

```
In:
0.832 64.643
0.818 78.843
1.776 45.049
0.597 88.302
1.412 63.458

Out:
0 …
```
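For reference, the classic single-layer perceptron fits in a few lines; below is a NumPy sketch (Python rather than MATLAB) using the `In` rows quoted above. The `Out` column is truncated in the excerpt, so the binary labels here are hypothetical:

```python
import numpy as np

X = np.array([[0.832, 64.643],
              [0.818, 78.843],
              [1.776, 45.049],
              [0.597, 88.302],
              [1.412, 63.458]])
y = np.array([0, 1, 0, 1, 1])  # hypothetical targets; 'Out' is cut off above

w, b, lr = np.zeros(X.shape[1]), 0.0, 0.01

# Classic perceptron rule: adjust only on misclassified samples
for epoch in range(100):
    for x_i, y_i in zip(X, y):
        pred = 1 if x_i @ w + b > 0 else 0
        w += lr * (y_i - pred) * x_i
        b += lr * (y_i - pred)

print(w, b)
```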