neural-network

neural network cost doesn't decrease

Submitted by 大憨熊 on 2020-05-17 07:45:06
Question: When I run my neural network, the cost always stays around its initial value. Sometimes it increases and sometimes it decreases, but never by much. I tried some different seeds, but this didn't work. I also tried different inputs, but that didn't work either. I looked over the math for the training, but I couldn't find anything wrong with it.

class NeuralNet():
    def __init__(self):
        np.random.seed(1)
        self.w0 = 2*np.random.random((3, 4))-1
        self.w1 = 2*np.random.random((4, 1))-1
    def sigmoid(self, x):
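Since the excerpt is truncated, here is a minimal sketch of the usual fixes: when the cost hovers around its initial value, the most common causes are a missing sigmoid derivative in the backward pass or a learning rate that is effectively zero. The toy data, learning rate, and loop count below are assumptions, not the asker's code; only the weight shapes are taken from the question.

import numpy as np

np.random.seed(1)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)  # toy inputs (assumed)
y = np.array([[0], [1], [1], [0]], dtype=float)                          # toy targets (assumed)

w0 = 2 * np.random.random((3, 4)) - 1   # same shapes as the question's weights
w1 = 2 * np.random.random((4, 1)) - 1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 1.0  # learning rate (assumed); a tiny value makes the cost look stuck
for epoch in range(10000):
    # forward pass
    a1 = sigmoid(X @ w0)
    a2 = sigmoid(a1 @ w1)
    # backward pass: note the sigmoid derivative a * (1 - a) at every layer
    delta2 = (a2 - y) * a2 * (1 - a2)
    delta1 = (delta2 @ w1.T) * a1 * (1 - a1)
    w1 -= lr * a1.T @ delta2
    w0 -= lr * X.T @ delta1

print(np.mean((a2 - y) ** 2))  # ends far below the initial cost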

How inverting the dropout compensates the effect of dropout and keeps expected values unchanged?

Submitted by 南笙酒味 on 2020-05-16 04:42:25
Question: I'm learning regularization in neural networks from the deeplearning.ai course. In the dropout regularization lecture, the professor says that if dropout is applied, the calculated activation values will be smaller than when dropout is not applied (i.e., at test time). So we need to scale the activations in order to keep the testing phase simpler. I understand this fact, but I don't understand how the scaling is done. Here is a code sample which is used to implement inverted dropout.

keep_prob = 0.8 # 0 <=
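A minimal sketch of the scaling step, using a stand-in activation matrix (the shape is assumed): each unit survives with probability keep_prob, so the expected value of the masked activation is keep_prob times the original. Dividing by keep_prob cancels that factor, so the expected activations match the no-dropout network and nothing special is needed at test time.

import numpy as np

keep_prob = 0.8
a3 = np.random.rand(1000, 500)  # stand-in activations (assumed shape)

d3 = (np.random.rand(*a3.shape) < keep_prob).astype(float)  # ~80% ones, ~20% zeros
a3_inv = a3 * d3 / keep_prob    # drop units, then rescale survivors by 1/0.8

# E[d3] = keep_prob, so E[a * d3 / keep_prob] = a: the expectation is unchanged
print(a3.mean(), a3_inv.mean())  # nearly equal for a large layer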

Multiple inputs with Keras Functional API

Submitted by 陌路散爱 on 2020-05-16 03:17:28
Question: It seems that Keras lacks documentation regarding the functional API, but I might be getting it all wrong. I have multiple independent inputs, and I want to predict an output for each input. Here's my code so far:

hour = Input(shape=(1,1), name='hour')
hour_output = LSTM(1, name='hour_output')(hour)
port = Input(shape=(1,1), name='port')
port_output = LSTM(1, name='port_output')(port)
model = Model(inputs=[hour, port], outputs=[hour_output, port_output])
model.compile(loss="mean_squared_error",
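A hedged sketch completing the snippet with random stand-in data: because the two streams never interact, this single Model simply trains both heads in one fit() call, with dict keys matching the layer names given above.

import numpy as np
from keras.layers import Input, LSTM
from keras.models import Model

hour = Input(shape=(1, 1), name='hour')
hour_output = LSTM(1, name='hour_output')(hour)
port = Input(shape=(1, 1), name='port')
port_output = LSTM(1, name='port_output')(port)

model = Model(inputs=[hour, port], outputs=[hour_output, port_output])
model.compile(loss='mean_squared_error', optimizer='adam')

# one array per named input, one target per named output (data assumed)
x_hour = np.random.rand(32, 1, 1)
x_port = np.random.rand(32, 1, 1)
y_hour = np.random.rand(32, 1)
y_port = np.random.rand(32, 1)
model.fit({'hour': x_hour, 'port': x_port},
          {'hour_output': y_hour, 'port_output': y_port},
          epochs=2, batch_size=8)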

Executing multiple models in TensorFlow with a single session

Submitted by 前提是你 on 2020-05-15 08:56:33
Question: I'm trying to run several neural network models in TensorFlow in parallel; each model is independent of the rest. Is it necessary to create a session for each of the executions I launch with TensorFlow, or could I reuse the same session for each of the models? Thank you.

Answer 1: A session is linked to a specific TensorFlow Graph instance. If you want to have one session for all, you need to put all your models in the same graph. This may cause you naming problems for tensors and is IMO
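A minimal sketch of the one-graph-per-model alternative the answer alludes to (TF 1.x style, matching the question's use of sessions; the tiny linear model is a placeholder for a real network):

import tensorflow as tf

def build_and_run_model(scale):
    graph = tf.Graph()                 # each model gets its own graph...
    with graph.as_default():
        x = tf.placeholder(tf.float32, shape=(None, 3), name='x')
        w = tf.Variable(tf.random_normal((3, 1)) * scale)
        y = tf.matmul(x, w)
        init = tf.global_variables_initializer()
    with tf.Session(graph=graph) as sess:  # ...and its own session
        sess.run(init)
        return sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})

# the two models share no tensors, so there are no naming collisions
print(build_and_run_model(1.0))
print(build_and_run_model(10.0))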

Hierarchical Attention Network - model.fit generates error 'ValueError: Input dimension mis-match'

Submitted by ぐ巨炮叔叔 on 2020-05-14 21:28:13
Question: For background, I am referring to the Hierarchical Attention Network used for sentiment classification. For code: my full code is posted below, but it is just a simple revision of the original code posted by the author at the link above, and I explain my changes below. For training data: here. For word embeddings: this is the GloVe embedding, here. Key config: Keras 2.0.9, scikit-learn 0.19.1, Theano 0.9.0. The original code posted in the link above takes a 3D-shaped input, i.e., (review,
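Since the full code is elided here, a hedged sketch of the shape contract that usually triggers this mismatch: the HAN wraps a sentence encoder in TimeDistributed, so the outer model must be fed a 3D array (reviews, sentences, words); feeding the usual 2D (samples, words) array to fit() produces exactly an input-dimension error. The sizes and layer widths below are assumptions, and the attention layers are omitted for brevity.

import numpy as np
from keras.layers import Input, Embedding, GRU, TimeDistributed, Dense
from keras.models import Model

MAX_SENTS, MAX_WORDS, VOCAB = 15, 100, 20000  # assumed sizes

# inner model: encodes a single sentence of word indices
sent_in = Input(shape=(MAX_WORDS,), dtype='int32')
emb = Embedding(VOCAB, 100)(sent_in)
sent_enc = GRU(50)(emb)
sentence_encoder = Model(sent_in, sent_enc)

# outer model: applies the sentence encoder to every sentence of a review
review_in = Input(shape=(MAX_SENTS, MAX_WORDS), dtype='int32')
review_enc = TimeDistributed(sentence_encoder)(review_in)
doc = GRU(50)(review_enc)
out = Dense(2, activation='softmax')(doc)
model = Model(review_in, out)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

x = np.random.randint(0, VOCAB, size=(8, MAX_SENTS, MAX_WORDS))  # must be 3D
y = np.eye(2)[np.random.randint(0, 2, size=8)]
model.fit(x, y, epochs=1, verbose=0)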

How to keep the values of tensors in one layer in each epoch and pass them to the next epoch in TensorFlow

Submitted by 时光总嘲笑我的痴心妄想 on 2020-05-12 08:52:28
Question: I have a general question. I am developing a new layer to incorporate into an autoencoder; to be more specific, the layer is something like the KCompetitive class over here. What I want is to save the output of this layer in a variable, let's call it previous_mat_values, and then pass it to that same layer in the next epoch as well. To put it another way, I want to be able to save the output of this layer from epoch 1 in one variable, and then in epoch 2 again use that same matrix
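One common pattern, sketched under assumptions (the layer name, shapes, and the persisted statistic are illustrative, not from the KCompetitive code): store the value in a non-trainable weight and overwrite it each time the layer runs, so whatever was written during epoch 1 is readable when epoch 2 begins.

import tensorflow as tf
from keras import backend as K
from keras.layers import Layer

class RememberPrevious(Layer):
    def build(self, input_shape):
        # persistent, non-trainable storage that survives between epochs
        self.previous_mat_values = self.add_weight(
            name='previous_mat_values',
            shape=(input_shape[1],),
            initializer='zeros',
            trainable=False)
        super(RememberPrevious, self).build(input_shape)

    def call(self, inputs):
        prev = self.previous_mat_values          # values from the last call
        new = K.mean(inputs, axis=0)             # the statistic to persist (assumed)
        update = K.update(self.previous_mat_values, new)
        with tf.control_dependencies([update]):  # run the write on every pass
            # output is unchanged here; prev can feed the real computation
            return inputs + 0.0 * K.sum(prev)

    def compute_output_shape(self, input_shape):
        return input_shape

Note this writes on every batch, not once per epoch; if strictly per-epoch values are needed, the variable can instead be copied in an on_epoch_end callback.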

Validation loss when using Dropout

Submitted by 末鹿安然 on 2020-05-11 07:20:28
Question: I am trying to understand the effect of dropout on validation Mean Absolute Error (a non-linear regression problem). (Plots: without dropout; with dropout of 0.05; with dropout of 0.075.) Without any dropout, the validation loss is greater than the training loss, as shown in plot 1. My understanding is that the validation loss should be only slightly higher than the training loss for a good fit. Carefully, I increased the dropout so that the validation loss is close to the training loss, as seen in plot 2. The dropout is only
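A minimal sketch (architecture and data assumed) of the experiment being described, sweeping the dropout rate on the same regression network. One detail worth keeping in mind when reading such curves: Keras applies dropout only during training, so the training loss is computed by a handicapped network while the validation loss is not, which by itself pulls the two curves together as the rate grows.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

def build_model(rate):
    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(10,)))
    model.add(Dropout(rate))         # active in training, disabled at validation
    model.add(Dense(64, activation='relu'))
    model.add(Dense(1))              # linear output for regression
    model.compile(optimizer='adam', loss='mae')
    return model

x = np.random.rand(1000, 10)
y = x.sum(axis=1, keepdims=True) + 0.1 * np.random.randn(1000, 1)

for rate in (0.0, 0.05, 0.075):
    hist = build_model(rate).fit(x, y, validation_split=0.2,
                                 epochs=20, verbose=0)
    print(rate, hist.history['loss'][-1], hist.history['val_loss'][-1])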

Reshaping Keras layers

Submitted by 允我心安 on 2020-05-10 04:27:18
Question: I have a 416x416 input image. How can I create an output of 4 x 10, where 4 is the number of columns and 10 the number of rows? My label data is a 2D array with 4 columns and 10 rows. I know about the reshape() method, but it requires that the resulting shape have the same number of elements as the input. With a 416 x 416 input size and max-pooling layers, I can get at most a 13 x 13 output. Is there a way to achieve a 4x10 output without loss of data? My input label data looks, for example, like [[ 0 0 0 0] [ 0 0
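A hedged sketch (the convolutional base is a stand-in) of the usual way around the element-count restriction: reshape() cannot turn 13 x 13 into 10 x 4, but a Dense layer can learn a projection from the flattened features to exactly 40 values, which Reshape then lays out as 10 rows by 4 columns.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Reshape

model = Sequential()
model.add(Conv2D(16, (3, 3), activation='relu', padding='same',
                 input_shape=(416, 416, 3)))
model.add(MaxPooling2D((4, 4)))      # stand-in base; more blocks in practice
model.add(Flatten())
model.add(Dense(40))                 # learnable projection to 40 outputs
model.add(Reshape((10, 4)))          # 10 rows x 4 columns, matching the labels
model.summary()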
